Health News Review

Here we go again. Headlines across America today are blaring lines like “Coffee may reduce stroke risk.”

It was a big study, but an observational study. Not a trial. Not an experiment. And, as we say so often on this site that you could almost sing along with the chorus, observational studies have inherent limitations that should always be mentioned in stories. They can show a strong statistical association, but they can’t prove cause and effect. So they can’t prove benefit or risk reduction. And stories should say that.

USA Today, for example, did not explain that in its story. Nor did it include any of the limitations that were noted in, say, a HealthDay story, which stated:

“The problem with this type of study is that there are too many factors unaccounted for and association does not prove causality, said Dr. Larry B. Goldstein, director of the Duke Stroke Center at Duke University Medical Center.

“Subjects were asked about their past coffee consumption in a questionnaire and then followed over time. There is no way to know if they changed their behavior,” Goldstein said.

And, he noted, there was no control for medication use or other potential but unmeasured factors.

“The study is restricted to a Scandinavian population, and it is not clear, even if there is a relationship, that it would be present in more diverse populations. I think that it can be concluded, at least in this population, that there was not an increased risk of stroke among coffee drinkers,” he said.”

When you don’t explain the limitations of observational studies, and/or when you imply that cause and effect has been established, you lose credibility with some readers. And you should. Note some of the comments left on the USA Today website:

• “Within a few weeks a new ‘study’ will come out telling us how bad coffee is for us.”

• “Sign…I wish someone would make up their minds! Wasn’t it just a week or so ago there was a study about smog, coffee, etc., being bad for ya?”

• “Remember when “scientific” studies were considered trustworthy and reliable?? How can anyone tell the few pearls of knowledge in a world of pointless studies that flip-flop results and rehash incessantly??”

• “Drinking coffee reduces strokes per this study. Didn’t another say it causes cancer?”

USA Today wasn’t alone in being incomplete.

WebMD was just plain inaccurate when it stated: “1 or More Cups of Coffee a Day Reduces Stroke Risk in Women.” The study didn’t prove that.

CBSNews.com had a simply silly story that led with this:

“Ladies, you knew there was a good reason for that double mochachino you have every morning and maybe that one at lunch too.”

At least they came back later and explained:

“As for your mochachino, no word yet on the benefits of whipped cream and chocolate sauce.”

But why even go there to begin with?

ABCNews.com, by comparison, emphasized that this study showed “association, not causation.” Kudos to them.

For anyone who doesn’t grasp the importance of using the correct language to describe observational studies – journalist, consumer, or researcher, for that matter – please see our primer on this topic.

Comments

Jason Crain posted on March 11, 2011 at 11:22 am

Thanks for posting this. It’s refreshing to read.
It’s also worth noting that journalists are not entirely to blame. There is a disturbing cognitive slip that happens when this data is publicized. Take, for example, this quote from the National Cancer Institute:
“There have been no controlled clinical trials on the effect of regular physical activity on the risk of developing cancer. However, observational studies have examined the possible association between physical activity and a lower risk of developing colon or breast…”
Most journalists, and publicists, looking to gain social or economic capital for some cause, would blow this entire section completely out of proportion.