Kevin Lomangino is the managing editor of HealthNewsReview.org. He tweets as @KLomangino.
Why do we review news releases?
Because the exaggerated claims sometimes made in these public relations documents can get passed along–with little independent analysis or scrutiny–to unsuspecting news consumers who may think they’re reading carefully vetted journalism.
Here are two fresh examples that demonstrate how this happens and why it’s a problem. In both cases, we applied our 10 systematic review criteria not only to the news release where the claim originated but also to a news story that appeared to be prompted by the news release. Both examples show how unsupported claims can flow unfiltered down the news stream, where they eventually reach patients and the public.
The biotech company Amgen tripped our radar with a news release about its cholesterol-lowering drug Repatha. Though the company called the study a “landmark” and claimed that the results show Repatha “significantly reduced the risk of cardiovascular events,” our reviewers could find no data in the release that would back up such statements. When the reviewers contacted Amgen for copies of the study abstracts named in the release, the company declined, instead referring us to the American College of Cardiology (ACC), the sponsor of an upcoming meeting where the results will be presented. The ACC informed us that the abstract is under embargo until the presentation on March 17.
Our reviewers were not content to take Amgen’s word for it and argued that journalists shouldn’t, either. “This early release strikes us as an effort to frame the discussion about the drug’s benefits in the trial without providing the required background needed for assessment,” they said.
A Reuters story reporting on the study shared Amgen’s positive framing, even though supporting data were not provided. As a result, our reviewers complained, it reads “more like marketing copy than journalism.” They acknowledged that the story was meant for an investor audience but noted that any average reader could find the story on a Google search the same way we did. In their own words:
We’d argue it’s more responsible to readers to wait for the actual data to be released, so they can be vetted by outside experts, than to publicize unverified results. At the very least, any story reporting on these results should be clear that they need to be consumed with a healthy side of caution and skepticism, given the company’s obvious incentive to frame the data positively.
Amgen’s study will eventually be presented, and I have no doubt that it will show a “significant” reduction in cardiovascular events, as the release claims. But how big is “significant”? How reliable is the evidence supporting that result, and were there any key limitations? How common are adverse effects? There are a host of questions that can only be answered by a careful review of the full study results. And those questions should be addressed before anyone is allowed to declare the study a “landmark.”
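To see why the size of a “significant” reduction matters, consider a quick back-of-the-envelope sketch. The numbers below are purely hypothetical, not from the Amgen trial; they simply show how the same result can look very different depending on whether it is reported as a relative or an absolute risk reduction.

```python
# Hypothetical numbers for illustration only; they are NOT from the Amgen trial.
# The point: a large relative risk reduction can coexist with a small
# absolute benefit, which is why the full data matter.

def risk_summary(events_treated, n_treated, events_control, n_control):
    """Express the difference between two event rates in several ways."""
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    relative_risk_reduction = (risk_control - risk_treated) / risk_control
    absolute_risk_reduction = risk_control - risk_treated
    number_needed_to_treat = 1 / absolute_risk_reduction
    return relative_risk_reduction, absolute_risk_reduction, number_needed_to_treat

# Suppose 3% of control patients and 2% of treated patients have a
# cardiovascular event over the study period.
rrr, arr, nnt = risk_summary(events_treated=200, n_treated=10_000,
                             events_control=300, n_control=10_000)
print(f"Relative risk reduction: {rrr:.0%}")   # 33%
print(f"Absolute risk reduction: {arr:.1%}")   # 1.0%
print(f"Number needed to treat:  {nnt:.0f}")   # 100
```

In that invented scenario, a headline could truthfully trumpet a 33 percent reduction in risk, yet only about one patient in 100 would avoid an event. That is exactly the kind of context a data-free news release leaves out.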
This is another example of how a study’s actual findings can get lost as they travel through different messengers on the way to consumers. A news release put out by the European CanCer Organisation appears to describe a test that could detect signs of early-stage but serious cancer in the breath of apparently healthy people.
But as our reviewers pointed out, that’s not what the study was about at all.
The underlying trial merely showed that, in most cases, breath analysis could distinguish people who were already known to have stomach or esophageal cancer (mostly advanced) from people who had no signs of cancer.
It’s a lot easier to differentiate a small group of patients who are already known to have advanced cancer from those who don’t than it is to detect early cancer in a large population of apparently healthy people. And a test that may be “85% accurate overall” in the first scenario could very well be useless in the second one because it will generate a huge number of false-positive results.
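A rough sketch of the arithmetic shows why. In the sketch below, the release’s “85% accurate overall” is treated, purely as an assumption, as 85% sensitivity and 85% specificity, and the screening prevalence is an invented figure of 1 in 200; none of these numbers comes from the study itself.

```python
# Assumptions for illustration only: "85% accurate overall" is treated here as
# 85% sensitivity and 85% specificity, and screening prevalence is guessed at
# 0.5%. None of these figures comes from the breath-test study.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive test result reflects actual disease."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Case-control setting: about half of those tested are known to have cancer.
print(positive_predictive_value(0.85, 0.85, prevalence=0.5))    # ~0.85

# Screening setting: roughly 1 in 200 apparently healthy people has the disease.
print(positive_predictive_value(0.85, 0.85, prevalence=0.005))  # ~0.03
```

Under those assumptions, only about 3 percent of positive results in a screening population would reflect actual cancer; the rest would be false alarms.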
But again, a HealthDay story based on the release was willing to pass along these inflated claims of accuracy, unvetted by any independent expert. The story also seconded the news release’s suggestion that the test could lead to improved survival, which is something the study never looked at.
We’ve seen this many times before–for example, when a big government-funded hypertension study was hailed as a landmark before any results were made available to the public. A year and a half later, we’re still learning new details about the study that may affect its application to real-world patients.
The takeaway here is that the health news stream is often polluted at the source–by news releases that make inflated claims based on insufficient evidence. Too many news outlets are content to distribute those claims to readers without adequate scrutiny.
Who’s looking out for the unsuspecting reader who is drinking from that polluted stream?