When reading a story about a new treatment, test, product or procedure, people want to know: How effective is the intervention? Do the numbers back it up?
We expect news stories to explain, in numerical terms, what the researchers measured, and to use absolute numbers whenever possible. Many news stories tell us how wonderful a new treatment, test, product or procedure may be. Few provide helpful numbers to back that up.
We also think it’s very useful to explain whether the findings make an actual difference in people’s lives. If a study says a new drug improved function by 30% in MS patients, what does that mean? How was that measured? What would an MS patient want to know?
Also, research often isn’t measuring an actual improvement in health or quality of life; instead it centers on surrogate outcomes, such as changes in blood test results or tumor shrinkage. These endpoints can be useful for researchers (and biotech investors), but surrogate outcomes do not automatically translate into living better or longer. Readers need to know that.
If a story is about preclinical research like a mouse or monkey study, it must point out that researchers have no guarantee that the intervention will provide the same benefit to people.
And stories that rely too heavily on patient anecdotes may paint an unrepresentative picture of the true benefit. The same goes for unchallenged, exaggerated quotes (“this is a real game-changer/breakthrough/cure,” etc.), which can throw a story out of balance and overshadow the statistics. When a story is top-heavy with personal stories, it’s hard for readers to sustain their critical thinking when (if) they reach the quantitative information. If you hear glowing patient anecdotes about how well something worked, always ask yourself whether that was a representative example.
Victor Montori, MD, is a medical professor and diabetes specialist at the Mayo Clinic:
STAT’s cautious story provides interesting insights into challenges of Alzheimer’s research:
When it comes to quantifying the benefits, the story gave us numbers for both the experimental group and the placebo group. Specifically, “the patients who received all six low doses did the best; their average improvement after the 12 weeks was 1.5 points on the 100-point cognitive test.” It then says, “Patients receiving a placebo (most were also on standard Alzheimer’s drugs) lost 1.1 points.” We’re told the difference was not statistically significant.
With focus on one patient, story manages to convey complexity of T-cell therapy for leukemia
It can be risky to lean heavily on the viewpoint of one patient to talk about a new treatment, but this Philadelphia Inquirer story was well-balanced. It made sure to discuss, in succinct numerical terms, how many people in a related study were helped by the treatment, and how many were not.
Bravo, New York Times, for using absolute risk reduction numbers in story on fish oil for asthma
The New York Times smartly conveys risk reduction in this look at using fish oil for asthma. We learn, for example, that fish oil reduced the risk of asthma by 31 percent, the relative risk benefit. That is tempered by inclusion of the absolute risk reduction: 16.9% of mothers who supplemented had children with asthma, compared with 23.7% of mothers taking a placebo. This helps readers keep expectations in check.
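For readers who want to see how the relative and absolute figures relate, here is a short illustrative sketch (in Python, not from the Times story; note that the published 31% came from the trial’s own statistical analysis, so a raw calculation from these two proportions lands slightly lower):

```python
# Asthma rates as reported in the story (illustrative calculation only)
supplemented = 0.169   # 16.9% of children of supplementing mothers developed asthma
placebo = 0.237        # 23.7% of children of mothers on placebo

# Absolute risk reduction: the difference readers actually experience
arr = placebo - supplemented            # 6.8 percentage points

# Relative risk reduction: the headline-friendly number
rrr = arr / placebo                     # about 29% from these raw rates

# Number needed to treat: mothers who would supplement to prevent one case
nnt = 1 / arr                           # roughly 15

print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
```

The point of the exercise: the same trial yields a modest-sounding 6.8-percentage-point absolute difference and an impressive-sounding ~30% relative one. Stories that report only the latter leave readers unable to judge the former.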
Not Satisfactory examples:
To taper or not to taper off opioids? Vox lays out strengths and weaknesses of new study
This otherwise strong story fell short on this criterion. We’re only told that with tapering, based on the literature review, “patients on average saw improved pain, function, and quality of life.” But what do those terms mean, exactly, and how much of an improvement on those criteria — numbers-wise — are we talking about here? “Improved” isn’t enough information for patients making tough choices about how to manage their pain.
LA Times portrays PSA screening analysis as far more clear-cut and definite than even the authors claim
This story reports only the study’s estimate of a relative risk reduction of 25 to 32 percent. This figure is meaningless without reporting the underlying absolute risk of dying from prostate cancer. As readers of the STAT story (and our review) will find out, “for a man in the U.S., the risk of dying of prostate cancer is about 2.5 percent. A mortality reduction of 30 percent would lower the death rate to 1.75 percent.”
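The STAT arithmetic quoted above can be checked in a few lines (an illustrative sketch, not from either story; the 30% figure is simply a round number inside the study’s 25–32% range):

```python
baseline_risk = 0.025       # ~2.5% risk of dying of prostate cancer for a U.S. man
relative_reduction = 0.30   # a round figure within the reported 25-32% range

# Risk after applying the relative reduction to the absolute baseline
screened_risk = baseline_risk * (1 - relative_reduction)   # 1.75%

# Absolute risk reduction: 0.75 percentage points
arr = baseline_risk - screened_risk

# On this simplified view, roughly how many men per death averted
nnt = 1 / arr

print(f"screened risk = {screened_risk:.2%}, ARR = {arr:.2%}, NNT = {nnt:.0f}")
```

A 30% relative reduction sounds dramatic; applied to a 2.5% baseline it amounts to three-quarters of a percentage point, which is exactly why the absolute baseline belongs in the story.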
Washington Post’s otherwise well-reported story on ecstasy for PTSD skirts evidence discussion
This story reports that the drug ecstasy could be a “breakthrough” for those with PTSD. What does that look like in numerical terms? All we’re told is that slightly more than half of the 107 participants reported major reductions in PTSD symptoms. But this isn’t enough to understand the benefits. How was PTSD measured? How severe was it, on average, before the trial began? What do the researchers mean by “major” reductions? Was there a control group of patients, and what were those results like?