Simple Rules to Understand Medical Claims
A blog post from a prominent scientist made it seem nearly impossible to sort out medical claims. I don't see it that way.
Andrew Gelman is a professor of statistics and political science. He writes a popular blog where he is outspoken—especially about bogus studies. Few public intellectuals have more credentials than Gelman.
That’s why I was drawn to his short post regarding the difficulty in digesting research claims. The two claims in question were a) Vitamin D and COVID-19 and b) fish oil and prostate cancer.
Two readers emailed Gelman to inquire about the possible treatment benefits of Vitamin D for SARS-CoV-2 infection and the higher risk of prostate cancer from fish oil supplements.
The writers of these emails cited five studies, three to support the potential treatment benefits of Vitamin D for COVID-19 and two to support the fish oil and prostate cancer link.
Gelman’s response surprised me:
I have no idea what to think about any of these papers. The medical literature is so huge that it often seems hopeless to interpret any single article or even subliterature.
In the age of the Internet (and now ChatGPT) that seemed overly defeatist. Gelman goes on to write:
An alternative approach is to look for trusted sources on the internet, but that’s not always so helpful either. For example, when I google *cleveland clinic vitamin d covid*, the first hit is an article, Can Vitamin D Prevent COVID-19?, which sounds relevant but then I notice that the date is 18 May 2020. Lots has been learned about covid since then, no?? I’m not trying to slam the Cleveland Clinic here, just saying that it’s hard to know where to look. I trust my doctor, which is fine, but (a) not everyone has a primary care doctor, and (b) in any case, doctors need to get their information from somewhere too.
Here was the professor’s conclusion:
I don’t know what is currently considered the best way to summarize the state of medical knowledge on any given topic.
At the risk of aggravating Professor Gelman, who is clearly many-fold smarter than me, I would offer a less nihilistic approach to evaluating medical claims on the Internet.
The first step involves Bayesian thinking. That is… the consideration of prior beliefs.
The most important priors when it comes to medical claims are simple: most things don’t work. Most simple answers are wrong. Humans are complex. Diseases are complex. Single causes of complex diseases like cancer should be approached with great skepticism.
One of the studies sent to Gelman was a small trial finding that Vitamin D effectively treated COVID-19. The single-center open-label study enrolled 76 patients in early 2020. Even if this were the only study available, the evidence is not strong enough to move our prior beliefs that most simple things (like a Vitamin D tablet) do not work.
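That updating step can be sketched with a toy Bayes-factor calculation. All numbers below are hypothetical, chosen only to illustrate the logic, not taken from any study:

```python
def posterior_prob(prior_prob, bayes_factor):
    """Update a prior probability using a likelihood ratio (Bayes factor)."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Hypothetical numbers: a pessimistic prior ("most things don't work")
# and only modest evidence from a small, open-label, single-center trial.
prior = 0.05          # assumed 1-in-20 chance a simple supplement truly works
weak_evidence = 3.0   # assumed modest Bayes factor from a weak trial

print(round(posterior_prob(prior, weak_evidence), 3))  # → 0.136
```

Even granting the small trial a Bayes factor of 3, the posterior probability stays under 14 percent; weak evidence barely moves a pessimistic prior.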
The next step is a simple search—which reveals two large randomized controlled trials of Vitamin D treatment for COVID-19, one published in JAMA and the other in the BMJ. Both were null.
You can use the same strategy for evaluating the claim that fish oil supplementation leads to higher rates of prostate cancer.
Start with prior beliefs. How is it possible that one exposure increases the rate of a disease that mostly affects older men? Answer: it’s not very possible. And even if fish oil marginally increased the rate of one disease, there are thousands of other diseases one can get. (← that is the problem with screening.)
Now consider the claims cited in the emails to Gelman.
While both studies stemmed from randomized trials, neither was a primary analysis. These were association studies using data from the main trials, and therefore we should be cautious about making causal claims.
Now go to Google. This reveals two large randomized controlled trials of fish oil vs placebo therapy.
The ASCEND trial of n-3 fatty acids in 15k patients with diabetes found “no significant between-group differences in the incidence of fatal or nonfatal cancer either overall or at any particular body site.” And I would add: no difference in all-cause death, either.
The VITAL trial included cancer as a primary endpoint. More than 25k patients were randomized. The conclusions: “Supplementation with n−3 fatty acids did not result in a lower incidence of major cardiovascular events or cancer than placebo.”
I am not arguing that every claim is simple. My case is that the evaluation process is slightly less daunting than Professor Gelman seems to imply.
Of course, medical science can be complicated. Content expertise can be important. Google and AI cannot replace the wisdom of experienced clinicians.
But that does not mean we should take the attitude: “I have no idea what to think about these papers.”
I offer five basic rules of thumb that help in understanding medical claims:
1. Hold pessimistic priors.
2. Be super-cautious about causal inferences from nonrandom observational comparisons.
3. Look for big randomized controlled trials—and focus on their primary analyses.
4. Know that stuff that really works is usually obvious (antibiotics for bacterial infection; defibrillation with an AED to convert ventricular fibrillation).
5. Respect uncertainty. Stay humble about most “positive” claims.
And always…take time to Stop and Think.
JMM
Thanks for reading. I have been shocked by the support this newsletter has received. Thank you.
Well, sure, it's simple if you have many years of medical school and deep-seated expertise in the topic. For someone who does not know what a "randomized trial" is, it's a little less simple. Your post is the classic "an expert thinks things in their field are simple to understand because they're an expert, not because they're actually simple" kind of comment.
i use to date a bayesian. but we both had to become episcopalian to impress her mother.
ive adjusted my priors to no longer date people w mothers, & expect contentment in my relationships. so far it has been lonely: but also no relationships have failed. look, not all data is useful. nor all dating. _JC