Why journalists should take greater care when reporting on alternative health treatments

Kathlyn Stone is an associate editor at HealthNewsReview.org. She tweets as @KatKStone.

A couple of recent headlines dealing with complementary and alternative medicine (CAM) caught our attention:

  • “Acupuncture may be effective painkiller in the ER” — HealthDay News
  • “New study shows promise of yoga in treating back pain” — CBSNews.com

When it sounds too good to be true, it probably is. When we reviewed these stories, we found that in both cases the journalists had not adequately scrutinized the studies and ultimately mischaracterized the findings.

In our review of the HealthDay story, we disagreed with the premise of the article, which is evident in our headline, “Acupuncture in the ER: No, study did not prove it was ‘safe and effective.’”

That’s because the published study determined that “none of the examined therapies provided optimal acute analgesia.”

Meanwhile, the CBS News story was about a study that compared the benefits of three different interventions – yoga, physical therapy and reading educational materials – on alleviating chronic back pain.

Our takeaway: While the CBS News story made it clear that yoga wasn’t a clear winner, it left out a key detail: Neither yoga nor physical therapy was statistically better than education (the control group).

Why are alternative therapies framed so positively?

In both examples, it appears that journalists did not examine the source studies critically and perhaps relied on a PR release that put a positive spin on the findings.

So what is going on? Why are inconclusive CAM studies being framed so positively, with few or no cautions about the research?

Tim Caulfield has some ideas. He is one of North America’s leading skeptics on popular health advice, particularly that of the celebrity variety. Caulfield is research director of the Health Law Institute at the University of Alberta. But he’s probably better known among health news analysts as the author of the book Is Gwyneth Paltrow Wrong About Everything?

“I suspect there is a good deal of ‘white hat’ bias in the coverage of CAM studies,” Caulfield told me in an email. White hat bias refers to a bias toward research that’s not funded by industry and may involve “cherry-picking” the evidence to make it seem more significant than it is.

“For example, yoga is a light exercise that is popular and pretty noninvasive. Who wouldn’t want it to work as a therapy? But the media needs to take greater care when evaluating these studies,” Caulfield said. “What is the intervention being compared to? Often, the media reports a positive result when the results could equally be described thus: ‘this CAM intervention works as well as other stuff that doesn’t work well.'”

Caulfield also pointed out that it is often near impossible to properly blind these studies, so the placebo effect and “regression to the mean” (or an averaging out of the results over time) almost always play a big role. 

“Finally, I think people assume that there is no science hype in the context of CAM. But, as studies have demonstrated, there is often a strong tendency to publish positive results in this context.” 

For the yoga/back pain study we mentioned earlier, it wasn’t just CBS News that made yoga sound more effective than it was. Other news outlets committed the same mistake.

While there may not be big bucks in alternative therapies (at least not when compared to, say, cancer treatment), journalists still need to scrutinize the findings as closely as they would any other study.

As Caulfield notes, “just because it is CAM research, doesn’t mean hype isn’t a problem.”

Comments (3)


Alan Cassels

July 18, 2017 at 2:31 pm

While it’s good to remind ourselves how much hype and exaggeration can be involved in the reporting of complementary medicine, and the many poor ways it is studied, we should not throw out the baby with the snake oil. Many people who complain about the ‘anti-science’ nature of complementary medicine are unaware of how extensively some of it has been studied, and some would be very surprised at the number of interventions supported by randomized trials. According to the most recent edition of the Cochrane Library, there are currently 759 protocols and reviews related to complementary and alternative medicine; see the link here: http://cam.cochrane.org/cochrane-reviews-and-protocols-related-complementary-medicine

My suggestion to journalists wishing to cover any intervention, whether conventional or alternative: the first stop should be a peek into the Cochrane Library. This will let you know whether the subject has been studied, how well or how thoroughly it has been studied, and whether there are any systematic reviews available on the treatment in question. (My conflict of interest: I have no financial ties to the Cochrane Collaboration, but I am the author of the book The Cochrane Collaboration: Medicine’s Best Kept Secret (Agio, 2015).)

Janet e Smith

July 21, 2017 at 1:44 pm

Caulfield has a slight bias toward Western hospital and doctor’s-office medicine, which is also in many cases less than what it claims, and sometimes downright wrong and harmful. Though, as with alternative medicine, there are exceptions, and with both there are excellent and important knowledge and practices. We live in a fortunate time, when more is known about all kinds of health care, so we can work at making informed decisions for ourselves and our families, and it is our responsibility to do so. Do read Caulfield, but be wise and come to your own conclusions by adding your own experience and the effect on your health. Whatever you choose at a given time, always ask questions, and never leave everything to a physician without doing some research yourself; it can only help. When an expert does not want your views, you might need to find another expert, one who wants to keep learning and who respects your ability and right to learn and to care for your own health.

William M. London

July 24, 2017 at 7:30 pm

The fundamental problem is that many journalists buy into the preferred semantics of promoters of non-validated and invalidated health care interventions, which include using euphemistic marketing buzzwords such as “alternative,” “holistic,” “integrative,” “functional,” “natural,” “non-Western,” and “complementary” as if these represent conceptually meaningful or coherent types of health care. Such buzzwords misleadingly imply safety, effectiveness, validity, and sometimes even plausibility. When we encounter “alternative” used as an adjective for an intervention, a common mistake is to infer that the intervention is a VIABLE alternative. I have called for health educators to eschew the use of euphemistic buzzwords for aberrant methods of health care. (See https://sciencebasedmedicine.org/please-dont-define-complementary-and-alternative-health-practices/ ) HealthNewsReview has advised journalists not to use eight words in medical news that are commonly used by promoters of euphemistic buzzword medicine. (See https://www.healthnewsreview.org/toolkit/just-journalists-writing-tips-case-studies/8-words-and-more-you-shouldnt-use-in-medical-news/ ) I suggest that journalists also avoid using terms such as “alternative medicine,” especially when such use gives the misguided impression that different standards should apply to assessing safety, effectiveness, and validity, or that implausible rationales underlying interventions should be disregarded.