Why Are There So Many Bad Medical Studies?
A colleague gave me interesting feedback on last Friday’s This Week in Cardiology podcast.
He asked: John, why do journals publish so many terrible studies?
It made me stop and think.
The feedback concerned my first topic on last week’s podcast. I covered an extremely flawed observational study that evaluated the use of lead extraction in patients who had cardiac devices (pacemakers and ICDs) and bacterial infection in the heart (endocarditis).
(Note: this time it was lead extraction, but it could have been any number of comparisons of people who received or didn't receive something, e.g., Vitamin D, blueberries, saunas, add-on-surgical-thing-that-can-be-billed-for, etc.)
In the case of cardiac devices, literally everyone in cardiology agrees that if a patient with a cardiac device gets a heart infection from the device, the best treatment is to remove all hardware. That's because it is nearly impossible to clear the bacteria without removing the hardware.
So-called lead extraction, however, can be a difficult procedure. The body tends to scar down leads, and extracting them requires special tools. In the process, it is possible to tear blood vessels and damage the heart.
In short, lead extraction confers risk. Yet, in major referral centers, there are doctors who, over years, have become expert at lead extraction. In an expert’s hands, lead extraction is a lot less risky.
Proponents of lead extraction feel that there is substantial underuse of this procedure. They argue that patients die unnecessarily because doctors either a) don’t recognize the need for extraction, or b) don’t have access to lead extraction centers.
Yet the decision to extract leads can be tricky. Many of these patients have oodles of other illnesses. Some are at end-of-life. Consider, say, a 90-year-old non-ambulatory person who has dementia and lives in a nursing home.
The study that prompted my colleague to ask about the state of medical evidence drew from a “Readmissions Database.” The authors identified patients who had lead extraction vs those who did not. They reported that patients who did not have lead extraction had a much greater probability of dying.
Both they and the authors of an accompanying editorial drew the causal inference that these data show that doing more lead extraction would save lives. That sounds reasonable. And maybe it is correct.
But.
While I believe that lead extraction—especially when done by experts—is a wonderful treatment, and I also suspect that it is underused in places without access to expert centers, it is wrong to use non-random comparisons to draw these conclusions and promote lead extraction.
Doing so hurts our credibility as a science-based or evidence-based profession.
As I have written previously in this newsletter, non-random comparisons are marred by bias. These sorts of studies led us to treat PVCs (premature ventricular contractions) in patients after heart attack. And we ended up killing more patients than we helped.
That is because, in non-random comparisons, a clinician chose to either do or not do lead extraction. He or she used all manner of variables to make that decision. Some of those factors make it into a database and can be adjusted for, but some do not.
In studies like this, we can never know whether it was the lead extraction that made patients die less often, or whether it was other factors, such as dementia, immobility, or a host of unmeasured conditions. It is highly probable that healthier patients got lead extraction.
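To make this concrete, here is a minimal simulation sketch in Python (my illustration, not from the study; the variables and numbers are made up). The treatment has zero true effect on death, but because healthier patients are more likely to be treated, a naive comparison of treated versus untreated mortality shows a large apparent benefit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical "health" score: higher = healthier.
# Assume this never makes it into the database.
health = rng.normal(0, 1, n)

# Confounding by indication: healthier patients are
# more likely to get the procedure.
p_treated = 1 / (1 + np.exp(-2 * health))
treated = rng.random(n) < p_treated

# True model: death depends ONLY on health.
# The treatment has zero effect by construction.
p_death = 1 / (1 + np.exp(1.5 * health))
died = rng.random(n) < p_death

print(f"Mortality, treated:   {died[treated].mean():.1%}")
print(f"Mortality, untreated: {died[~treated].mean():.1%}")
```

Despite a true treatment effect of zero, the treated group shows markedly lower mortality. And no amount of statistical adjustment rescues the comparison if, as here, the confounder was never recorded.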
We call this confounding. But that was not the only weakness of this study. Another was that the database covered only 22 of 50 states. How is that representative? A: it is not.
This paper is not an outlier. Bad studies are everywhere. The pandemic made it worse, but the problem predates the pandemic.
Back to the main question
My colleague wanted to know: why are there so many of these flawed studies?
I can only offer opinions.
Having closely looked at the medical literature for the past decade, my best guess is incentives.
Incentives: Journals get attention. Authors get publications, which help them advance in the field. Industry can also win from these studies. In this case, the flawed analysis promotes lead extraction, and the companies that make extraction tools stand to profit.
Legacy media also wins because they can cover these studies and garner attention. Rarely does a news article note that the study it covers is too flawed to support any conclusions.
My take-home is that there is little that consumers can do to stop the publication of bad studies. Incentives are just too strong.
So I offer two conclusions. The first is that consumers of the medical literature need to bolster their appraisal skills. Do not rely on the authors' or editorialists' conclusions. The incentive structure of the medical literature must be considered when reading a paper.
My second conclusion is a word to the scientists who publish these studies: consider what flawed papers do to trust in the scientific process.
While no study is perfect, there are analyses and conclusions so broken that the profession would be better off if scientists simply resisted the urge to do these analyses.