Criterion #4: Does the story seem to grasp the quality of the evidence?

Ideally, healthcare interventions are subjected to rigorous testing to prove that they work. When reading about a new intervention or diagnostic tool, people should come away with a sense of how rigorous the supporting evidence is.

61% of the news stories we’ve reviewed were rated Not Satisfactory on this criterion.

When rating a story on this criterion, we look to see if it included enough details about the study and how it fits into the hierarchy of evidence. As in: Was it an animal study? An observational study? A small safety trial? A large, randomized controlled trial? A meta-analysis or systematic review? A recommendation from a task force that reviews evidence? And regardless of the type of evidence, what was its quality?

Not all studies are equal. Not all evidence is bulletproof. There is no certainty or finality to everything that appears in the New England Journal of Medicine or any other journal. And just because something is a study, or just because it's published somewhere, doesn't mean the story is over. That's not the way science works.

That’s why we expect journalists to critically evaluate the evidence, not merely to take published or presented research as gospel. Yet many stories:

  • Present anecdotes as evidence of a treatment’s benefits – rather than as a single illustration of its use.
  • Leave out study limitations – even though all studies carry limitations.
  • Fail to caution readers about interpretation of uncontrolled data.
  • Fail to explain whether a primary outcome is a surrogate marker, or fail to caution readers/viewers about extrapolating from it to health outcomes.
  • Fail to point out the limited peer review that may have taken place with findings presented at a scientific meeting.
  • Conflate causation and association – failing to explain limitations of observational studies.
  • Get caught up in reporting on the latest study without noting larger, better-designed studies that have already been done.

Questions to ask:

  • What are the limitations of the evidence?
  • Was the study done in only a few people?
  • Was the study done for only a short time? What might happen long-term? Will there be follow-up?
  • Did the study report on an outcome that you really care about – like illness or death? Or did it only report on test results, markers, or scores?
  • Did this information come from a talk presented at a scientific meeting? If so, you should know this kind of research is often considered preliminary because other experts haven’t had a chance to thoroughly review it.
  • Were the findings from an animal or lab experiment that might not be applicable to human health?
  • Did the information simply present anecdotes as evidence of a treatment’s harms or benefits – rather than real numbers from the entire study group?

[Video: Gary Schwitzer, the founder and publisher of HealthNewsReview.org, explains this criterion.]

Satisfactory examples

Well done: STAT story on extended use of aromatase inhibitors for breast cancer
Our reviewers noted this story was strong on this criterion. It discussed the type of study (randomized and controlled) yet noted an important caveat to the research: that the study was too short to determine whether the intervention improved survival.

WSJ wisely reports on Ebola vaccines: ‘It is unknown…what antibody levels are needed to protect patients’
The story reveals to readers an important clue: that the new research was the “first placebo-controlled study of two vaccines against the Ebola virus.” It is also clear in pointing out the study’s shortcomings, citing several limitations, such as: “Ebola cases in Liberia began to dwindle early in 2015, and the outbreak there was declared over on May 9 of that year. By the time this work was fully under way, it was too late to see if vaccines actually prevented Ebola sickness and death.”

Reuters Health deftly reports new data on hormone therapy for menopause
This one came down to details, details, and more details. Readers are informed of how many women were in the trial (a lot), for how long (quite a long time), and what the results were in both the treatment and placebo groups. It explored the limitations, too.

Not Satisfactory examples

ABC News touts affordability of at-home BRCA test without delving into any drawbacks
This one left readers in the dark. There is nothing in the story about the quality of the evidence. Was there a journal article published about the test? Did the FDA just approve the test for sale? What do we know about the evidence used for that approval?

Reuters turns unpublished pilot studies into news
Reviewers noted that, although the story was devoid of many evidentiary details, its bigger problem was that it turned unpublished pilot studies into news. Research is considered preliminary until it has been peer reviewed and published, and news stories need to point that out when this is the case.

Plenty of context, but evidence was AWOL in WSJ story on uterine fibroid treatment
A personal testimonial from a celebrity with fibroids does not stand in for the evidence one expects to see from a clinical trial or trials.
