I teach my classes – even undergrads – that if I could change just one thing about the way research news is communicated to the public – for the sake of public understanding – it would be to include absolute risk/benefit data in every story and every message – not just relative risk/benefit data. On the HealthNewsReview.org website, we evaluate stories on whether they include absolute data, and we offer a brief primer on the topic. Reporting only relative data can make an effect seem much larger than it really is.
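To see why relative numbers can mislead, here is a short worked example with hypothetical figures (the drug, risks, and time frame are invented for illustration): a treatment that cuts risk from 2 in 100 to 1 in 100 is a 50% relative risk reduction, but only a 1 percentage point absolute reduction.

```python
# Hypothetical example: a drug lowers 5-year disease risk
# from 2 in 100 (control group) to 1 in 100 (treated group).
control_risk = 0.02
treated_risk = 0.01

# Relative risk reduction: the drop as a fraction of the control risk.
relative_risk_reduction = (control_risk - treated_risk) / control_risk

# Absolute risk reduction: the raw difference in risk.
absolute_risk_reduction = control_risk - treated_risk

# Number needed to treat: patients treated per one event prevented.
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # 50%
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")  # 1.0%
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")   # 100
```

A headline saying "drug cuts risk in half" and one saying "drug helps 1 patient in 100" describe the same result; only the second tells readers how much the finding matters to them.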
But even medical journals don’t do a good job of demanding that absolute data be included in article submissions, according to a review in this week’s BMJ. In it, Dartmouth/VA researchers looked at 222 articles over a year’s time in six journals: Annals of Internal Medicine, BMJ, Journal of the American Medical Association, Journal of the National Cancer Institute, Lancet, and New England Journal of Medicine.
They found that 68% of articles failed to report the underlying absolute risks in the abstract, and half of those articles didn’t report absolute data anywhere in the paper.
The authors’ conclusion: “Absolute risks are often not easily accessible in articles reporting ratio measures and sometimes are missing altogether—this lack of accessibility can easily exaggerate readers’ perceptions of benefit or harm.”
The relative inaccessibility of absolute data is no excuse for journalists. If you’re going to cover research news, you need to do it completely and correctly, and that includes getting at the absolute data – even if the journal article upon which you base your story doesn’t include such data. Journalists should demand it from researchers they cover, just as journals should demand it from researchers submitting articles.