If you follow health care news in mainstream media, you will be flooded with news from observational studies – research that is not a true experiment but, rather, draws on what is seen by observing people doing different things over time. It’s valid and important research, but one thing we can’t lose sight of: such research CANNOT PROVE CAUSE-AND-EFFECT. It can only point to statistical associations, such as “It appears that people who do X are more likely to have Y happen.” As a friend of mine wrote, the rooster crowing in the morning does not make the sun come up, even though the statistical association between the two is high.
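To see how an association can appear without any causal link, here is a minimal simulation (entirely synthetic data, not drawn from any real study): a hidden confounder – call it overall health-consciousness – independently drives both citrus intake and stroke-free years, so the two end up strongly correlated even though neither causes the other.

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hidden confounder: a person's general health-consciousness.
# It raises both citrus intake and stroke-free survival, but
# citrus intake never touches stroke_free_years directly.
health = [random.random() for _ in range(10_000)]
citrus_intake = [h + random.gauss(0, 0.2) for h in health]
stroke_free_years = [h + random.gauss(0, 0.2) for h in health]

r = pearson(citrus_intake, stroke_free_years)
print(f"correlation: {r:.2f}")  # strongly positive, yet no causal link
```

An observational study that only measures the two outcome variables would report this association as real – and it is real – but a headline saying citrus “protects” against stroke would still be wrong, because the data cannot distinguish this confounded scenario from a causal one.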
Last week we saw stories about citrus fruits protecting women from stroke. This week it was stories about “sleeping pills could kill 500,000.” This week we also had stories about “omega-3 fatty acids protecting the aging brain” … and about “Vitamin A may slash melanoma risk.” Sometimes it’s stories about lower risk (or protection); sometimes it’s stories about higher risk.
One thing is in common: almost all of the stories are simply wrong, using inaccurate language to describe the kinds of studies in question.
Week after week, year after year, for 6 years now, we have written about news stories that fail to explain the limitations of observational studies to readers. They use causal language – suggesting cause-and-effect findings – for studies that cannot prove cause-and-effect.
We don’t just criticize; we try to offer help. In that spirit, we offer a primer on our site in which we give examples of studies and examples of the words that are the only accurate ways to describe these studies.
The problems, the inaccuracies, and the criticism would go away if journalists would simply read and act on this advice.
These are the kinds of stories that contribute to the background noise in the daily drumbeat of health care news stories – perhaps leading many consumers to be overwhelmed and to lose confidence in science and in journalism.