I often say in lectures that evidence is what separates clinicians from palm readers.
Evidence tells us if our actions actually help people. You might push back and say: come on, a clinician who has treated patients for years gets a sense of what works and what does not. Experience counts.
Let me tell you a story that shows how this isn’t always so.
When I started medical school in the 1980s, cardiologists used drugs to suppress premature beats (called PVCs or premature ventricular complexes) when they occurred in patients who had just had a heart attack.
It made perfect sense. Previous studies had shown that PVCs were associated with a higher risk of sudden death. They were a marker or surrogate for a bad outcome. Previous studies had also shown that anti-arrhythmic drugs (AADs) suppressed PVCs.
The therapeutic fashion of the day, therefore, became to prescribe AADs to suppress PVCs after a heart attack. Experts recommended it, and to go against their experience meant breaking the standard of care.
CAST Trial
Somehow, people got the nerve to study this practice.
In CAST, about 1500 post-heart-attack patients with PVCs were randomly assigned to either placebo or an AAD. The primary endpoint (the question to be answered by the experiment) was death or cardiac arrest.
You can see from the slide that the therapeutic fashion of the day, the entrenched practice, was actually killing patients. A lot of them.
Young clinicians now take for granted the practice of avoiding AADs after heart attack. But this knowledge only came about because a group of doctors had the courage to subject an established practice to a proper scientific experiment.
An outside observer might ask: how could doctors not know this? Well, it’s easy to explain, and it is the reason why evidence is so important.
Consider the view of a single doctor. He or she would see a patient having these scary PVCs. The doctor would prescribe the drug and observe a marked decrease in PVCs. Good, thought the doctor.
But in the 1980s, the death rate after heart attack was so high that it was normal for these patients to die. Thus, when a patient did not turn up for follow-up two months later, no one was alarmed.
This is where evidence shows its strength.
When you randomize one group to placebo and the other to the drug, you can simply count up outcomes. We call these studies RCTs, or randomized controlled trials.
RCTs are powerful because the randomization (mostly) balances all the variables and allows us to assess the effect of the drug. Since the two groups are balanced, if one group did better, we can say the drug caused the result.
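To make this concrete, here is a minimal simulation sketch. It is not from the lecture; the numbers, the "severity" variable, and the size of the drug's harm are invented for illustration. It shows how confounding by indication (sicker patients being more likely to get the drug) distorts a simple observational comparison, while a coin-flip assignment balances the groups so that counting deaths in each arm recovers the drug's true effect.

```python
import random

random.seed(0)

def dies(severity, treated):
    # Hypothetical numbers: baseline risk rises with severity; the drug adds harm.
    p_death = 0.10 + 0.30 * severity + (0.15 if treated else 0.0)
    return random.random() < p_death

N = 100_000

# Observational scenario: sicker patients (more PVCs) are more likely to get the drug,
# so severity confounds the comparison between treated and untreated patients.
obs = {"drug": [0, 0], "no drug": [0, 0]}  # arm -> [deaths, total]
for _ in range(N):
    severity = random.random()
    treated = random.random() < severity          # confounding by indication
    arm = "drug" if treated else "no drug"
    obs[arm][0] += dies(severity, treated)
    obs[arm][1] += 1

# RCT scenario: a coin flip decides treatment, so severity is (on average) balanced.
rct = {"drug": [0, 0], "no drug": [0, 0]}
for _ in range(N):
    severity = random.random()
    treated = random.random() < 0.5               # randomization
    arm = "drug" if treated else "no drug"
    rct[arm][0] += dies(severity, treated)
    rct[arm][1] += 1

for name, table in [("Observational", obs), ("RCT", rct)]:
    for arm, (d, n) in table.items():
        print(f"{name:13s} {arm:8s} death rate = {d/n:.3f}")
```

In this toy setup the drug truly raises the absolute death risk by about 15 percentage points. The randomized comparison shows roughly that gap, because the two arms have similar severity. The observational comparison exaggerates it, because the treated group was sicker to begin with; with a different pattern of prescribing, confounding could just as easily hide the harm.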
People will rightly say that a trial provides an average effect—meaning, you can’t predict an outcome in any one individual. That is correct, but knowing the average effect in a select group of patients gives doctors a powerful tool.
There is much more to learn about evidence if we take the time to stop and think.
Stay tuned for more. If you like this introduction, be sure to tell your friends.
The remarkable thing is that treated patients died at nearly 3x the rate of non-treated patients, a gap that would be readily apparent in any modern observational study, yet people consistently cite this as the best example of why randomized trials are vital. If an observational study today found a treatment working this badly, would you demand a proper RCT before acting? How long did the CAST study take? What prompted it in the first place?