Why this matters to patients
Patient Dave deBronkart gives an example of how the size of a potential benefit may appear larger than it really is – depending on how the numbers are framed.
A scientist’s perspective on why stories must explain the true scope of possible benefits
Karen Sepucha, PhD, emphasizes the importance of knowing absolute, not just relative, data.
Why Does This Matter?
- Many news stories tell us how wonderful a new treatment, test, product or procedure may be. Few provide helpful numbers to back that up.
- We have seen that news stories, drug ads, and some studies in journals tend to report benefits as the relative reduction in the frequency of something bad happening. For instance, a drug is said to reduce the risk of hip fracture by 50 percent. But the absolute change is really only from 2 in 100 people who don’t take the drug to 1 in 100 who do take it. It is true that 1 percent is half – or 50 percent – of 2 percent, but we think the relative risk number inflates the impression of how much impact the drug has. The absolute risk reduction is just 1 percent (2 percent minus 1 percent). So when you hear about treatments said to have an effect of 20 percent, 30 percent, 40 percent or higher, we encourage you to ask yourself, “50 percent of what?” That will help you remember: it may only be a change from 2 in 100 to 1 in 100.
- Steve Woloshin and Lisa Schwartz of Dartmouth Medical School and the VA Outcomes Research Group in Vermont explain absolute-relative risk in a creative way. They say that knowing only the relative data is like having a 50% off coupon for selected items at a department store. But you don’t know if the coupon applies to a diamond necklace or to a pack of chewing gum. Only by knowing what the coupon’s true value is – the absolute data – does the 50% have any meaning.
- Another question you might ask is, “How many people have to be treated in order to prevent even one problem?” With cholesterol-lowering statin drugs, the number might be 50. TIME magazine once published a column on the Number Needed to Treat (NNT), calling it “Medicine’s Secret Stat.” And BusinessWeek published a story on the NNT for statin drugs.
- It’s insufficient to say “significantly increased.” What does that mean? How was it measured?
- Statistical significance may not equal clinical significance. What difference did it make in people’s lives?
- The plural of anecdote is not data. Patient vignettes make engaging reading, but they are not data. When a story is top-heavy with personal stories, it is hard for readers to sustain their critical thinking when (if) they get to the quantitative information. If you hear glowing patient anecdotes about how well something worked, always ask yourself whether that was a representative example. A single unchallenged, exaggerated success story can throw the whole framing of benefits out of balance.
- Reporting only surrogate or intermediate endpoints – changes in blood test scores, for example – may not reflect a true benefit, because those measures may not influence individual health outcomes. What difference did the intervention make in people’s lives?
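The arithmetic behind relative risk, absolute risk, and the Number Needed to Treat can be sketched in a few lines. This is a minimal illustration using the hypothetical hip-fracture example above (2 in 100 without the drug, 1 in 100 with it); the function name is our own, not from any of the cited sources.

```python
def risk_summary(control_risk, treated_risk):
    """Compute absolute risk reduction (ARR), relative risk
    reduction (RRR), and number needed to treat (NNT) from
    two event rates expressed as fractions."""
    arr = control_risk - treated_risk   # absolute risk reduction
    rrr = arr / control_risk            # relative risk reduction
    nnt = 1 / arr                       # people treated to prevent one event
    return arr, rrr, nnt

# The hip-fracture example: 2 in 100 without the drug, 1 in 100 with it.
arr, rrr, nnt = risk_summary(0.02, 0.01)
print(f"Relative risk reduction: {rrr:.0%}")  # 50% -- the headline number
print(f"Absolute risk reduction: {arr:.0%}")  # 1% -- the more informative number
print(f"Number needed to treat:  {nnt:.0f}")  # 100 treated to prevent one fracture
```

Note how the same drug can honestly be described as cutting risk “by 50 percent” while 100 people must take it for one of them to avoid a fracture – exactly why the absolute numbers matter.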
Thumbs Up Examples
The story was crystal clear that hormone therapy to treat older men with localized prostate cancer offered no benefit in terms of overall survival or risk of death from prostate cancer. The story was very complete, and it included information about the subset of patients – the roughly 5% of the study population whose localized cancer had a more aggressive cell type – who appeared to derive some benefit in reducing the number who died of prostate cancer, but still without any improvement in overall survival.
There are no data showing the products work, and the reporter makes this clear.
Lacking data to report, the author clearly lays out the company’s claims and then includes rebuttals from experienced allergists.
The story noted something that rarely gets mentioned: “a recent Agency for Healthcare Research and Quality study, which found that no prostate-cancer treatment was superior to the others. The report also noted the lack of good comparative studies.”
Thumbs Down Examples
This broadcast falls down in virtually every possible way, but stands out for failing to address the most important reason for having spine surgery. The story doesn’t say what the outcome of surgery was for the patient interviewed or for anyone else in the medical literature, other than to point out that the operation went like clockwork and the patient was heading home by 2:00 on the afternoon of her surgery. Although that is one outcome some people care about, it says nothing about the reason she had the operation: to relieve daily, disabling pain.
There was no quantification of benefit in the story – strange, when the entire story is about the drugmaker seeking new approval for the drug. On what evidence is that request based?
Using the phrase “showed significant improvement in mental tests” does not meet our standard for quantifying benefits. What does this mean?