Three Academics Speak Out on Bad Medical Studies
My last post on bad medical studies brought notable comments from academic physicians
First, read my latest post on why there are so many bad medical studies.
Two of the three commenters nudged me to consider a more optimistic take.
One of these came from Dan Matlock, a professor of medicine at the University of Colorado. You should first know Dan is one of the smartest and kindest academics I have met.
He wrote, publicly, in the comments section, that incentives to publish—even bad studies—were part of the problem, but that I should consider a more optimistic take: namely, that EVERY study has limitations. Dan’s main problem is with the outsized conclusions of medical studies.
He pushed back on my criticism of drawing causal links from observational data by noting that it took 40 years to establish the link between smoking and lung cancer—and none of that evidence came from randomized trials. He cautioned me not “to throw the baby out with the bath water.”
Dan also noted that before we invest millions of dollars in well-designed randomized controlled trials (RCTs), it might be wise to do some less costly observational studies first.
My three responses:
The first is to thank Dan for writing publicly. It’s quite an honor to have legit academics comment on my Substack.
Regarding outsized conclusions, I could not agree more. In fact, I was a co-author on a paper that found an incredible amount of spin in the cardiac literature. Spin is defined as language designed to distract the reader from a non-significant primary endpoint. So, yes, I agree that bad conclusions are a problem.
But… if conclusions of bad studies were honest, they would often read something like this: “our methodological limitations preclude making any useful conclusions.”
On the matter of using observational studies to guide trials, I would suggest that since it is hard to know which non-randomized comparison studies are true, observational data could just as easily lead us to waste money doing trials on things we shouldn’t study.
The next two comments are anonymous.
The next comment comes from an academic cardiologist who has a stellar record of publication—and who is known for drawing cautious conclusions when leading observational research. I have the utmost respect for this person.
The email reads:
Hi John—I completely agree with you that it is far too often that terrible retrospective observational studies such as the one below are used to draw causal inferences.
BUT, re the why—can I suggest a less cynical reason? We want the answers to help our patients. Good studies are very hard to do—extraction is a great example; it would be very hard to do an RCT. So people try to do the best that seems possible using the tools available, such as administrative datasets, despite all the flaws.
This does not of course excuse using bad data to make the wrong conclusions. Just maybe a little less cynical? Your reasons are likely also true.
My response:
I believe that academics have, as one goal, the discovery of answers to help our patients. The problem is that other goals influence behavior as well. (The same is true in private practice; look no further than how productivity-based compensation influences physician behavior.)
My main pushback is that I don’t think attributing outcomes to incentives is cynical.
I see the effect of incentives as normal human behavior. If I were assigned to fix the problem of too many bad medical studies, I would not try to persuade individual researchers to stop doing bad studies; I would change the incentive structures.
The third commenter requires a trigger warning.
If you hold to any optimism about medical science, you might want to avoid this section. It comes from another accomplished university physician embedded deep in the academic world.
John, your latest piece misses the mark. It isn't incentives alone. The biggest issue is rampant incompetence.
Spend more time dealing with reviewers, AEs (associate editors), etc.
There are actually few incentives for faculty to publish anymore. Ironically, residents and students have to publish to get into training programs. Faculty mostly don’t have grants, and little to no income is tied to publishing.
It is an ego- and/or internal-desire-driven exercise for 80%. The problem is that most of those people are not just wrongly incentivized (ego, pride, fame); they also lack competence—and there are no checks on incompetence.
I emailed back: rampant incompetence of whom, researchers or journal editors?
All of the above. Associate editors are the most powerful people. At all journals they are motivated by their own fame, favors, cachet of authors submitting, etc.
At lower tier journals, AEs have less sway because they need to fill the pages (not zero sway but less).
But regardless, competence is not very high. Most of them can’t see fraud staring them in the face. They don’t understand it when it is explained.
Few have an understanding of even basic statistics. Many don’t understand basic epidemiology.
Gosh.
I don’t know how to answer that last comment other than to say that when we read papers, perhaps we should turn our Bayesian-prior-belief knobs a click or two to the skeptical side—away from optimism.
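To put a rough number on that knob-turning, here is a minimal sketch using Bayes’ theorem. Every figure in it (the prior probabilities, the chance a true claim yields a “positive” study, the chance a false claim does) is hypothetical, chosen only to show the arithmetic, not drawn from any real literature.

```python
# Hypothetical illustration: how a skeptical prior changes what a single
# "positive" study should do to our belief that a claim is true.

def posterior(prior, p_pos_if_true=0.8, p_pos_if_false=0.3):
    """P(claim true | positive study), by Bayes' theorem.

    p_pos_if_true  = chance a true claim produces a positive study
    p_pos_if_false = chance a false claim still produces a positive study
    (both numbers are made up for illustration)
    """
    p_positive = p_pos_if_true * prior + p_pos_if_false * (1 - prior)
    return p_pos_if_true * prior / p_positive

optimistic = posterior(prior=0.50)  # reader who thinks half of tested claims are true
skeptical = posterior(prior=0.10)   # reader who thinks only 1 in 10 are true

print(f"Optimistic reader: {optimistic:.0%}")  # about 73%
print(f"Skeptical reader:  {skeptical:.0%}")   # about 23%
```

Same study, same positive result; the only thing that changed was the prior, and the conclusion moves from “probably true” to “probably not.” That is all turning the knob toward skepticism means.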
My friends, I guess the only way to sum this up is to say that science-done-by-humans is hard and slow.
And perhaps going slower and making fewer mistakes actually leads to faster progress.
I remain skeptical—not cynical.