Trials That Attempt to Expand Treatment Indications but Do Not Increase Knowledge
My latest column is up over at theheart.org | Medscape Cardiology
The first question I ask when looking at a medical study is its purpose. That is: was the study done to sort out an important question, or was it done to market an intervention?
Two studies presented at the recent American College of Cardiology meeting serve as excellent examples of the misplaced incentives in medical science.
In the column, which is written for a medical reader, I discuss the DapaTAVI and STRIDE trials.
In DapaTAVI, investigators randomized patients after transcatheter aortic valve implantation (TAVI). One group got the SGLT2 inhibitor (SGLT2i) dapagliflozin; the other group got standard care, with no placebo tablet.
Patients who get TAVI are older and have multiple conditions that predispose to fluid buildup requiring diuretic therapy.
After one year, 15% of patients in the dapagliflozin group had a primary outcome event (death or a heart failure event, meaning an urgent visit or hospitalization) vs 20.1% in the control arm. That 28% lower rate reached statistical significance, rendering the trial a win for dapagliflozin.
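As a quick back-of-the-envelope exercise, here is a minimal sketch in Python using only the two event rates reported above; note that the 28% figure likely reflects the trial's time-to-event hazard ratio, which differs slightly from the simple risk ratio computed here.

```python
# Minimal sketch: translating the reported DapaTAVI event rates into
# absolute and relative terms. The two rates are taken from the text
# above; everything else is simple arithmetic.

dapa_rate = 0.150     # primary outcome rate, dapagliflozin arm
control_rate = 0.201  # primary outcome rate, standard-care arm

arr = control_rate - dapa_rate   # absolute risk reduction
rrr = arr / control_rate         # relative risk reduction (risk-ratio based)
nnt = 1 / arr                    # number needed to treat for one year

print(f"Absolute risk reduction: {arr:.1%}")   # ~5.1 percentage points
print(f"Relative risk reduction: {rrr:.1%}")   # ~25%; the reported 28% likely reflects the hazard ratio
print(f"Number needed to treat:  {nnt:.0f}")   # ~20 patients treated for a year
```

In absolute terms, the difference is about 5 percentage points, or roughly 20 patients treated for a year to prevent one event.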
The problems I had with the trial were:
The control arm did not get a placebo tablet. That means patients and doctors knew the treatment assignment.
The driver of the positive primary endpoint was not death but HF events. That’s a problem because the decision to bring a patient in for an “urgent visit” or to hospitalize a patient is made by a clinician, and knowledge of treatment assignment could bias that decision. For instance, one option for a patient with HF symptoms is to take an extra oral diuretic tablet at home, which would not count as an endpoint event.
The other problem with using HF hospitalization as an endpoint is that older patients can be admitted to the hospital for many reasons. The authors don’t report total hospitalizations, so we can’t know whether reducing HF hospitalizations reduced the overall burden of hospital care.
My main criticism, however, is with the basic idea of the trial. We know that SGLT2i drugs are fairly potent diuretics; they pull fluid from the body via the kidneys.
I argue in the column that you could give an SGLT2i drug to any patient after any cardiac procedure and it would likely lead to fewer “HF events.” They used TAVI in this study, but it could just as well have been an ablation, a stent, or bypass surgery.
DapaTAVI therefore should not be used to expand the indication for SGLT2i drugs. Yet more than 40 news outlets covered the trial, nearly all with positive headlines.
The STRIDE trial was worse. This was an industry-sponsored and -run trial of semaglutide vs placebo in patients with peripheral artery disease and claudication (muscle pain with exercise due to inadequate blood supply).
The authors measured walking time on a treadmill as the primary endpoint. As most of you can imagine, treadmill walking time is highly susceptible to coaching and motivation. Who hasn’t run an extra 400 with prompting? Proper blinding is mandatory with such an endpoint.
And herein lies the fatal flaw of the study. Patients on semaglutide will surely know they are on it: they lose weight, feel full quickly, and often have nausea.
The difference was modest, not even 100 feet more than placebo. Yet it reached statistical significance, rendering STRIDE a positive study and earning it publication in the influential Lancet.
The increase is similar to that seen with the old, generic drug cilostazol. Yet more than 35 news outlets covered the trial with positive headlines.
And the modest increase in exercise time on semaglutide could have been due to a placebo effect. One way to detect unblinding in a trial is a simple blinding test, in which patients are asked at the end of the trial to guess their treatment assignment. This was not done in STRIDE.
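To make that concrete, here is a minimal sketch, with made-up numbers, of what such a blinding check could look like: you ask each patient which arm they think they were in, and under intact blinding the fraction of correct guesses should be close to a coin flip.

```python
# Hypothetical sketch of a simple end-of-trial blinding check.
# The guess counts below are invented for illustration; STRIDE did not
# report such an assessment. Under intact blinding, patients' guesses
# about their own assignment should be no better than chance.

from math import comb

def binomial_tail(correct: int, total: int, p: float = 0.5) -> float:
    """One-sided probability of seeing >= `correct` right guesses by chance."""
    return sum(comb(total, k) * p**k * (1 - p)**(total - k)
               for k in range(correct, total + 1))

# Hypothetical survey: 80 of 100 semaglutide patients guessed their arm correctly
correct_guesses = 80
patients_asked = 100

frac = correct_guesses / patients_asked
p_value = binomial_tail(correct_guesses, patients_asked)

print(f"Correct guesses: {frac:.0%} (chance would be ~50%)")
print(f"Probability of this by chance alone: {p_value:.2g}")
```

If a survey like this showed most semaglutide patients correctly guessing their assignment, the “placebo control” would be a control in name only.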
Conclusion:
Evidence-based practice relies on clinical trials. I laud investigators who do trials.
Yet it saddens me that trials-designed-to-be-positive make it to the big stage of a meeting and garner publication in major medical journals.
The upstream causes of this problem are complex. One cause is surely the profit motive. STRIDE is a good example of an industry-designed trial made to expand an indication for its drug. In my opinion, it does not reach that level.
The Spanish government funded DapaTAVI. Here, the upstream problem is the incentive to do trials that are almost certain to deliver positive results. I would have bet a lot on DapaTAVI being positive. When a trial comes back only modestly positive, and only on a subjective, bias-prone endpoint, medical science is not advanced.
I don’t have a solution to this problem—other than to sharpen critical appraisal skills so as not to be bamboozled.
JMM
Clinical trial gamesmanship has become a bigger problem over my lifetime as a doctor. I think part of the problem is a lower bar for approval at the FDA. When the FDA lowers the standard for approving a new drug or device, clinical trials lower their endpoint bar to meet it. In the 1990s, the FDA required that ICDs demonstrate a reduction in total mortality for approval. Now, CCM and phrenic nerve stimulation for sleep apnea are approved on a much lower standard.
Wise commentary by J. Mandrola. Science must get to the root of the problem to understand it and define treatment strategies. Trials are experimentation, sometimes necessary and useful, but if they do not lead us to an understanding of what is underlying, they are of reduced usefulness. Bravo once again, J.M.