Why this matters to patients
When Dave deBronkart was diagnosed with advanced kidney cancer, he learned how to evaluate the evidence behind different treatment options, and he urges other patients not to be intimidated.
Dr. Steven Atlas, one of our medical editors
Why Does This Matter?
- Science has been compared to a winding stream: it takes many turns, with different ebbs and flows. Journalists on daily deadline who try to make sense of new studies can be like people who run up to the stream, get their toes wet, and then run back to file their story. But the stream keeps flowing and keeps changing downstream. That’s one reason we need a good grasp of how solid the evidence being reported at any point in the stream really is.
- We should all realize that not all studies are equal. Not all evidence is bulletproof. There is no certainty or finality to everything that appears in the New England Journal of Medicine or any other journal.
- Yet the mere fact that something is a study – or that it’s published somewhere – often seems to suggest that the proof is in and the story is over. It almost never is. That’s not the way science works.
- Before you jump to conclusions about research results after hearing some publicity, ask yourself:
  - Is there any information on the limitations of the evidence?
  - Was the study done in only a few people?
  - Was the study done for only a short time?
  - Did the study report on an outcome that you really care about – like illness or death? Or did it only report on test results, markers, or scores?
  - Did this information come from a talk presented at a scientific meeting? (If so, you should know that it might be very preliminary information that other experts haven’t had a chance to thoroughly review.)
  - Were the findings from an animal or lab experiment that might not be applicable to human health?
  - Did the information simply present anecdotes as evidence of a treatment’s harms or benefits – rather than real numbers from the entire study group?
Common flaws in stories:
- Conflating causation and association – failing to explain limitations of observational studies.
- Failing to report on lack of a control group, lack of blinding, etc.
- Failing to report the study’s dropout rate – and why participants dropped out.
- Reporting on a small group of patients at one medical center with experienced surgeons, without asking how generalizable the results are.
- Getting caught up in reporting on the latest study without reporting on larger, better-designed studies that have already been done (one recent story failed to mention that a Cochrane review had examined 106 papers on the same topic!)
- Using vague, unexplained reassurances – what does it mean to say something is “relatively inexpensive, painless and safe”?
- Presenting anecdotes as evidence of a treatment’s harms or benefits – rather than as a single illustration of its use.
Thumbs Up Examples
The story described what is known about resveratrol as intriguing but very incomplete. It mentioned work done with mice, roundworms and yeast. While mentioning the various health claims made for this compound, the story was appropriately circumspect about them. A parting comment from a scientist from the National Institute on Aging indicated that until we know how this compound works and what the benefit might be, offering it on the open market is akin to peddling snake oil.
The story made clear that the work was being done in small numbers of children, that there’s “a long way to go,” and that researchers “caution that much more research is needed to prove and perfect the approach and that it is far from ready for widespread use.”
The story is based on results of two high-quality clinical trials, and it describes the methodologies sufficiently.
The story does an excellent job of laying out the caveats very early: that the results won’t end the controversy over the value of prostate cancer screening; and that the studies continue and may provide clearer answers in the future.
The reporter gets extra points for mentioning too that the two studies aren’t directly comparable.
Thumbs Down Examples
The segment clearly explained the source of this information: Glamour magazine got a group of seven women to follow some sleep guidelines. The medical correspondent should have taken the opportunity to help viewers understand the limitations of what can be learned by simply examining the experience of seven individuals.
This sort of ‘evidence’ does not qualify as an objective investigation of how sleep affects body weight.
The story does not provide any evidence whatsoever. It’s not clear if the new drug was studied through randomized trials or some other type of trial design. Readers have no context for the type or strength of the evidence.
The story stated in the lead that patients receiving Provenge lived four months longer without specifying that it was four months longer than placebo, not standard therapy.
The story improperly compares the reported survival advantage of the experimental treatment to the expected survival of men receiving standard chemotherapy, implying the two treatments have been tested head-to-head.
The story also includes speculation that Provenge could be more effective when given earlier in the course of the disease, even though there is no evidence to support the statement.
Quotes at the end of the story from leaders of some patient advocacy groups overstated the evidence of effectiveness, but they seem to accurately reflect the perspective of these activists.