Kevin Lomangino is the managing editor of HealthNewsReview.org. He tweets as @KLomangino.
Why do we review news releases?
Because the exaggerated claims sometimes made in these public relations documents can get passed along, with little independent analysis or scrutiny, to unsuspecting news consumers who may think they’re reading carefully vetted journalism.
Here are two fresh examples that demonstrate how this happens and why it’s a problem. In both cases, we applied our 10 systematic review criteria not only to the news release where the claim originated but also to a news story that appeared to be prompted by the news release. Both examples show how unsupported claims can flow unfiltered down the news stream, where they eventually reach patients and the public.
The biotech company Amgen tripped our radar with a news release about its cholesterol-lowering drug Repatha. Though the company called the study a “landmark” and claimed that the results show Repatha “significantly reduced the risk of cardiovascular events,” our reviewers could find no data in the release that would back up such statements. When the reviewers contacted Amgen for copies of the study abstracts named in the release, the company declined, instead referring us to the American College of Cardiology (ACC), sponsor of an upcoming meeting where the results will be presented. The ACC informed us that the abstract is under embargo until the presentation on March 17.
Our reviewers were not content to take Amgen’s word for it and argued that journalists shouldn’t, either. “This early release strikes us as an effort to frame the discussion about the drug’s benefits in the trial without providing the required background needed for assessment,” they said.
A Reuters story reporting on the study shared Amgen’s positive framing, even though supporting data were not provided. As a result, our reviewers complained, it reads “more like marketing copy than journalism.” They acknowledged that the story was meant for an investor audience but noted that any average reader could find the story on a Google search the same way we did. In their own words:
We’d argue it’s more responsible to readers to wait for the actual data to be released, so it can be vetted by outside experts, than to publicize unverified results. At the very least, any story reporting on these results should be clear that they need to be consumed with a healthy side of caution and skepticism, given the company’s obvious incentive to frame the data positively.
Amgen’s study will eventually be presented, and I have no doubt that it will show a “significant” reduction in cardiovascular events, as the release claims. But how big is “significant”? How reliable is the evidence supporting that result, and were there any key limitations? How common are adverse effects? There are a host of questions that can only be answered by a careful review of the full study results. And those questions should be addressed before anyone is allowed to declare the study a “landmark.”
Here is another example of how a study’s actual findings can get lost as they travel through different messengers on the way to consumers. A news release put out by the European CanCer Organisation appears to describe a test that could detect useful indications of early-stage, but serious, cancer in the breath of average people who seem healthy.
But as our reviewers pointed out, that’s not what the study was about at all.
The underlying trial merely showed that, in most cases, breath analysis could distinguish people who were already known to have stomach or esophageal cancer (advanced in most cases) from people who had no signs of cancer.
It’s a lot easier to differentiate a small group of patients who are already known to have advanced cancer from those who don’t than it is to detect early cancer in a large population of apparently healthy people. And a test that may be “85% accurate overall” in the first scenario could very well be useless in the second one because it will generate a huge number of false-positive results.
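The arithmetic behind that point is worth spelling out. Here is a rough sketch (not from the article; all numbers are assumed for illustration) of how a test that performs well in a case-control study can collapse when used for screening, because the positive predictive value depends heavily on how common the disease is in the tested population:

```python
# Illustrative sketch: why a test that looks "85% accurate" in a
# case-control study can fail as a population screening tool.
# The sensitivity, specificity, and prevalence figures below are
# assumptions chosen for illustration, not data from the study.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of positive test results that are true positives."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Case-control setting: roughly half the subjects are known cancer patients.
ppv_study = positive_predictive_value(0.85, 0.85, 0.50)

# Screening setting: assume 1 in 1,000 apparently healthy people
# actually has the cancer.
ppv_screening = positive_predictive_value(0.85, 0.85, 0.001)

print(f"PPV when half the subjects have cancer: {ppv_study:.1%}")
print(f"PPV when 1 in 1,000 has cancer:         {ppv_screening:.1%}")
```

Under these assumed numbers, a positive result is right about 85% of the time in the study-like setting but less than 1% of the time in the screening setting; nearly every positive is a false alarm. That is the sense in which an “85% accurate” test could be useless for screening healthy people.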
But again, a HealthDay story based on the release was willing to pass along these inflated claims of accuracy, unvetted by any independent expert. The story also seconded the news release’s suggestion that the test could lead to improved survival, which is something the study never looked at.
We’ve seen this many times before; for example, when a big government-funded hypertension study was hailed as a landmark before any results were made available to the public. A year and a half later, we’re still learning new details about the study that may affect its application to real-world patients.
The takeaway here is that the health news stream is often polluted at the source: by news releases that make inflated claims based on insufficient evidence. Too many news outlets are content to distribute those claims to readers without adequate scrutiny.
Who’s looking out for the unsuspecting reader who is drinking from that polluted stream?