Note to our followers: Our nearly 13-year run of daily publication of new content on HealthNewsReview.org came to a close at the end of 2018. Publisher Gary Schwitzer and other contributors may post new articles periodically. But all of the 6,000+ articles we have published contain lessons to help you improve your critical thinking about health care interventions. And those will still be alive on the site for a couple of years.

Weak reporting of limitations of observational research

A research letter in this week’s JAMA Internal Medicine addresses an issue that has become a pet peeve of ours: the failure of medical journal articles, journal news releases, and subsequent news stories to address the limitations of observational studies. Observational studies, although important, cannot prove cause and effect; they can show statistical association, but association does not necessarily equal causation.
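To see why an association need not mean causation, consider a minimal simulation (not from the research letter; purely illustrative, with hypothetical variable names): a hidden confounder drives both an "exposure" and an "outcome," so the two correlate strongly even though neither has any causal effect on the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden confounder (say, overall health consciousness) drives both variables.
confounder = rng.normal(size=n)

# "Exposure" (e.g., supplement use) and "outcome" (e.g., good health) each
# depend on the confounder, but NOT on each other.
exposure = confounder + rng.normal(size=n)
outcome = confounder + rng.normal(size=n)

# An observational analysis sees a strong association anyway (~0.5 here).
r = np.corrcoef(exposure, outcome)[0, 1]
print(f"correlation: {r:.2f}")

# If we could adjust for the confounder (here, by subtracting it out),
# the association would essentially vanish.
r_adj = np.corrcoef(exposure - confounder, outcome - confounder)[0, 1]
print(f"after adjustment: {r_adj:.2f}")
```

In real observational studies the confounders are unknown or imperfectly measured, which is exactly why such studies can only suggest, not prove, a causal link.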

The authors, from the University of Auckland in New Zealand, analyzed a combined 538 documents including major medical journal articles, accompanying editorials in those journals, news releases by those journals, and news stories written about all of the preceding.

Why?

They wrote:

“Observational research is abundant and influences clinical practice, in part via publication in high-impact journals and dissemination by news media. However, it frequently generates unreliable findings. Inherent methodologic limitations that generate bias and confounding mean that causal inferences cannot reliably be drawn. Study limitations may be inadequately acknowledged and accompanied by disclaimers that diminish their importance.”

Here’s what they found:

“Any study limitation was mentioned in 70 of 81 (86%) source article Discussion sections, 26 of 48 (54%) accompanying editorials, 13 of 54 (24%) journal press releases, 16 of 81 (20%) source article abstracts (of which 9 were published in the Annals of Internal Medicine), and 61 of 319 (19%) associated news stories. An explicit statement that causality could not be inferred was infrequently present: 8 of 81 (10%) source article Discussion sections, 7 of 48 (15%) editorials, 2 of 54 (4%) press releases, 3 of 81 (4%) source article abstracts, and 31 of 319 (10%) news stories contained such statements.”

The published JAMA Internal Medicine research letter presents these findings graphically.

That is an awful report card.

Why does it matter?  The authors summarize nicely:

“A possible consequence of inadequate reporting of limitations of observational research is that readers consider the reported associations to be causal, promoting health practices based on evidence of modest quality. Up to 50% of such practices prove ineffective when tested in randomized clinical trials. Giving greater prominence to the limitations of observational research, particularly in the publication abstract and journal press releases, might temper this enthusiasm and reduce the need for subsequent reversals of practice.”

We’ve written about dozens and dozens of examples of news stories and other media messages that have failed to address the limitations of observational studies, thereby misleading the public.

We’ve criticized major medical journal news releases for the same failure – The BMJ and The Lancet, for example.

For years, we’ve posted a primer on this site for journalists, news release writers and the general public, to help them understand the limitations.  The primer is entitled, “Does the Language Fit the Evidence? Association Versus Causation.”

The exaggeration should stop.  Observational studies play an important role.  But communicators should not try to make them more than what they are.
