Gary Schwitzer is Publisher and founder of HealthNewsReview.org. He tweets as @garyschwitzer.
In 2005, as I began building HealthNewsReview.org, I got permission from Dr. David Henry of the Media Doctor Australia project to adopt their news story review criteria. It was a patient-centered approach, laying out the questions patients need answered when hearing news about treatments, tests, products and procedures. In the ensuing 13 years, no one has ever placed in front of me a solid, specific suggestion for a better set of criteria for reviewing stories about health care interventions.
Eventually, our U.S. project, and others in Canada, Hong Kong, Japan and Germany, adopted the same Australian criteria. Ours was the best-funded, with the biggest team, the greatest output, and the largest audience, including internationally. Now we are ceasing our systematic reviews as we close down regular daily publishing next week.
So, one last time, we update our report card on the grades for more than 2,600 news stories and more than 600 PR news releases. As I’ve often said, I don’t think that all 10 of our review criteria have an equal weight or importance. The five that I list below, and the grades for them, are the five most important in my view. They are also the five categories that gave journalists and PR professionals the most trouble.
If you go to the news story review index page or any individual news story review page and scroll down, you'll see a colored pie chart in the right-hand margin. That chart links to a page showing the "national average" of all 2,600+ news stories our team of reviewers has reviewed. For ease of display, a story's grade on the 10 criteria we use is translated into a score of 0-5 stars. The national average of all stories we've reviewed is 3.1 stars.
Below that first pie chart is another one, with a pulldown menu that lets you choose any one news source at a time to see how its grade compares with the national average. The sample of stories varies widely among news organizations: more than 330 stories by the Associated Press and 260 by the New York Times, for example, but only 30 by Bloomberg, 19 by Buzzfeed, and 6 by FiveThirtyEight.com. Some news organizations made it harder to find relevant stories eligible for review. Some we began reviewing only recently, while others have been reviewed every day for more than 12 years.
We would have preferred larger sample sizes for all news organizations, but we stretched our available funding to accomplish even this much. And even where our samples were smaller, they still represent more of this type of systematic, criteria-driven review than anyone else in the world has ever done.
The once-venerable weekly newsmagazines’ online efforts are all below-average:
Each of the TV networks’ online news operations – except for CNN – was also below average across the board:
The distribution of news releases to journalists is where the faucet of polluted information often gets turned on. Not always. But as you can see from the grades in the chart above, news releases graded worse than news stories in each of the key categories.
While we have nearly 13 years of news story grades, our PR news release review database began just four years ago. Because we look at many more sources of PR news releases (government agencies, industry, academic medical centers, physician groups, journals, etc.) than we do for journalism, we don’t have the same interactive comparison chart for the PR products. The sample for any one organization is limited.
However, a few observations do stand out.
More important than the grades themselves is why those stories and releases were graded the way they were. Common flaws, with links to our Toolkit section where we try to help people understand and address those flaws, are listed below.
One of the things that I'm most proud of about my team's work is that we didn't merely grade performance; we offered constructive criticism intended to improve it. We did that in every review, with comments explaining why our reviewers judged something to be satisfactory or unsatisfactory. Beyond that, the primers, tips and resources in our Toolkit form one of the strongest collections of such aids to be found anywhere.
And all of that help will remain available online for some time to come even though we cease regular daily publication next week. The site – and these tools – will still be accessible. I hope to keep the site open for at least three years. So, please, take advantage of all the work that went into the development of these resources.
First, it is noteworthy that in this, our final year, we attracted 15% more visitors than in our previous highest year. How’s that for going out with a bang? Our weekly email digest was sent to 6,000 people every week. Our cumulative website traffic of millions of visits during our project’s existence is something I never anticipated when I launched this in 2006.
We also have published 2,870 blog posts. These often go beyond the boundaries of daily news reporting and PR news releases to touch on topics of health care advertising, marketing, journal policies, TV talk shows – any topic that influences the public dialogue about health care.
We produced 50 podcasts, interviewing patients, clinician-researchers, journalists and others. You can peruse the topics here.
We collected more than 20 stories of patient harms from misleading media messages about health care interventions.
We co-hosted a list of more than 100 industry-independent experts to help journalists find additional sources for their stories.
Our Toolkit contains dozens of resources for consumers and journalists to help them evaluate studies and to improve their critical thinking about health care interventions.
At our peak, we had 6.3 full-time-equivalent staffers and 50 occasional editorial contributors.
We were funded for 8.5 years by the Foundation for Informed Medical Decision Making (now defunct) and for four years by the Laura and John Arnold Foundation. In the middle of those years, we went 19 months without any funding, yet kept the site alive. I'll try to do that again for as long as I can, with occasional future articles by me and freelance contributors.