“Last week I wrote twice about exercise. Strictly speaking, both stories were complete lies.”
Well, Larry is speaking strictly there, but his heart is in the right place. Go to Cardiobrief and read the entire post, but to capture the gist here, the following is what Husten wrote about the first study:
The association of fitness and low mortality leads to the recommendation about “the importance of physical activity.” An accompanying editorial went further, recommending that “prescription of physical activity should be placed on a par with drug prescription.” Widespread media coverage of the study followed suit, with nearly all reports emphasizing the positive effects of exercise.
So what’s wrong here? It almost seems churlish to insist on the point, but of course the study (like all other observational studies) didn’t – couldn’t – actually say anything about the real effect of exercise on health. It seems reasonable to assume that more exercise leads to increased fitness leads to improved health. That’s what we all probably think and believe. It’s common wisdom. But it’s not entirely unreasonable to suppose that healthy people are much more likely to exercise, in effect reversing the cause and effect. And of course there may be other confounding factors that cloud the simple equation of exercise and health.
There’s more: even if you could prove that more exercise leads to better health that wouldn’t lead to an automatic conclusion that doctors should recommend exercise as much as drugs. First you would need to prove that an exercise prescription is just as effective as a drug prescription. It’s hard enough to get people to take inexpensive, life-saving drugs once a day. Is there any reason to think we can get any kind of effective level of compliance with an exercise prescription?
He goes on to write:
This works well for a professional readership, but it’s hard to imagine how this could be reported to consumers in a responsible way. Every story about an observational study would need to turn into a lesson in the limits of such studies, though this would no doubt please Gary Schwitzer and his great reviewers at HealthNewsReview.
Yes, we think every story about an observational study does need to include at least a little lesson on the limits of such studies. To fail to do so is incomplete and probably means the story is inaccurate and misleading for readers. We’ve published a primer on this topic – “Does The Language Fit The Evidence? – Association Versus Causation” – that’s no longer than it needs to be, so its length should not be intimidating. The message is quite simple: observational studies – while important, with a distinct role in the hierarchy of evidence – nonetheless cannot prove cause and effect. So language that states or implies a causal link from an observational study is wrong.
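The confounding and reverse-causation worry Husten raises can be made concrete with a toy simulation (a minimal sketch, not data from the studies discussed): suppose underlying health drives both who exercises and who dies, while exercise itself has zero causal effect. An observational comparison still shows exercisers with much lower mortality.

```python
import random

random.seed(0)
N = 100_000

deaths_ex, n_ex = 0, 0   # exercisers
deaths_no, n_no = 0, 0   # non-exercisers

for _ in range(N):
    health = random.random()                     # latent health: 0 (poor) to 1 (excellent)
    exercises = random.random() < health         # healthier people are more likely to exercise
    dies = random.random() < 0.3 * (1 - health)  # mortality depends ONLY on health, not exercise
    if exercises:
        n_ex += 1
        deaths_ex += dies
    else:
        n_no += 1
        deaths_no += dies

# Exercisers show markedly lower mortality even though, by construction,
# exercise has no causal effect here -- the association is pure confounding.
print(f"mortality among exercisers:     {deaths_ex / n_ex:.3f}")
print(f"mortality among non-exercisers: {deaths_no / n_no:.3f}")
```

Nothing in the simulated data distinguishes this confounded scenario from a true benefit of exercise, which is exactly why causal language about such an association is unjustified.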
As noted last week, some heart journals are getting on the bandwagon – warning scientist-authors that “inappropriate word choice to describe results can lead to scientific inaccuracy.” We’re pleased to see Cardiobrief address the issue as well.