Science news outlets were buzzing last week with findings from a major research initiative designed to replicate the results of studies in psychology.
The Reproducibility Project: Psychology re-ran 100 studies that experts considered foundational research in the field. This type of replication is important, explained project member Angela Atwood in a news release, because “Scientific evidence does not rely on trusting the authority of the person that made the discovery. Rather, credibility accumulates through independent replication and elaboration of the ideas and evidence.”
Alas, the effort to reproduce the studies did not go as well as one might have hoped — something that most media outlets emphasized in their coverage:
New York Times: Many Psychology Findings Not as Strong as Claimed, Study Says
Associated Press: Results of many psychology experiments can’t be duplicated, study finds
Of course, calling “bullshit” on the non-reproducible studies (as Gizmodo did) takes things a step too far, as many other stories about the research pointed out. Although it’s possible that the original studies were flat-out wrong and that’s why they couldn’t be duplicated, it’s also possible that the original studies were right and the repeat studies overlooked a real effect just by chance, the Associated Press notes. “Or both studies could be correct, with conflicting conclusions because of differences in how they were carried out,” AP explains.
Whatever the reason, the fact that many of these studies couldn’t be reproduced is certainly troubling news for the field of psychology. And psychology is far from alone in suffering from a replication crisis. As Stanford’s John Ioannidis commented to the Times, the problem could be even worse in fields such as neuroscience, clinical medicine, and animal research. Indeed, the journal Nature has a special collection on reproducibility that spans all of these disciplines and more.
The Reproducibility Project did not study any psychological treatments — it focused on basic research into concepts like memory and behavior. Nevertheless, the connection to health journalism and our own project here at HealthNewsReview.org should be readily apparent.
We often caution health care journalists not to treat journal articles as if they were the stone tablets brought by Moses down from the mountaintop. In our review criteria, we ask journalists to probe for study limitations that might lead to an exaggerated or incorrect result. We ask for context on previous research and how the new research compares with other studies. We ask for independent sources who might cast a skeptical eye on the findings.
Insofar as they highlight the uncertainty of many study results, the conclusions of the Reproducibility Project underscore that these criteria are necessary elements of quality health journalism. And yet, as our systematic reviews of news stories continue to demonstrate, all too often these elements are missing from coverage of health studies.
Our project aims to change that.
Disclosure: The Reproducibility Project is funded by the Laura and John Arnold Foundation, which also supports HealthNewsReview.org.