This TIME.com story suggests that this study will win over skeptics with the strength of its evidence. Perhaps it should have asked some skeptics whether they were, in fact, convinced. The competing HealthDay story was stronger.
Leaving aside the question of why the study being reported on was mysteriously withdrawn and then resubmitted to a different journal a year later with no explanation (an issue discussed in more detail below), this story still didn’t meet our standard for quality health journalism in most respects. It didn’t discuss costs, provided only relative (not absolute) risk reductions, and didn’t accurately describe the ramifications of certain statistical adjustments made by the authors.* Most important, it didn’t seek the opinion of an independent expert who could have shed some light on these issues, and who might just have tipped TIME off to the study’s mysterious and relevant backstory.
* A review of the source document reveals more about the – let’s call them – interesting statistical methods. The unadjusted primary outcome showed no difference between the two groups (hazard ratio 0.62, confidence interval 0.37-1.12). With a bit of adjusting for age, sex, and lipid-lowering medications, they got the hazard ratio to 0.52 (0.29-0.92). Including education and level of depression, they dropped the hazard ratio further. Interestingly, we are not told how many of the subjects in either group have diabetes. Also, they did not look at the effect of smoking on the primary outcome, despite what the story indicates. They looked only at changes in smoking over time, not at smoking as it related to the primary outcome. A pointy-headed argument? Or a big oversight? Ignoring smoking and the presence of diabetes seems like a great gaping hole in the analysis.
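To make the footnote’s point concrete: a result is conventionally called statistically significant only when the confidence interval around the hazard ratio excludes 1.0 (i.e., "no difference between groups"). A minimal sketch of that check, using the intervals quoted above:

```python
# A hazard ratio's confidence interval tells you whether the result is
# statistically significant: if the interval includes 1.0 (no difference
# between groups), the result is not significant at the conventional level.

def excludes_no_effect(ci_low, ci_high, null_value=1.0):
    """Return True if the confidence interval excludes the null value."""
    return not (ci_low <= null_value <= ci_high)

# Unadjusted primary outcome quoted in the footnote above:
print(excludes_no_effect(0.37, 1.12))  # False -> not significant
# After adjusting for age, sex, and lipid-lowering medications:
print(excludes_no_effect(0.29, 0.92))  # True -> significant
```

This is why the adjustments matter so much: they are what moved the upper bound of the interval from above 1.0 to below it.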
For the second time in as many weeks, an NIH-funded study of an alternative medical treatment has been published under murky circumstances. For reasons that aren’t entirely clear, the two studies have received markedly different receptions in the media. Last week, a troubled study of chelation, a controversial and potentially dangerous approach to lowering heart risk, was picked apart by the media and had its flaws exposed in fine detail. But this study of transcendental meditation seems to have escaped similar scrutiny by most mainstream news media, even though the circumstances surrounding its publication are also highly unusual and warrant additional investigation. As Larry Husten at CardioBrief points out, the study was originally submitted to Archives of Internal Medicine and had its publication put on hold just 12 minutes before it was scheduled to be released. Now the study has reappeared in a different journal, with different data, and no explanation as to why it was originally pulled or whether any concerns about the original manuscript were satisfactorily addressed. CardioBrief identifies a number of other inconsistencies surrounding the paper that should certainly raise red flags; we suggest you read his entire post for the full context.
Now, it’s unfair to expect that every reporter will be aware of this study’s strange backstory. And it’s unclear at this point whether the inconsistencies that CardioBrief identifies reflect anything more than poor communication and/or a lack of transparency on the part of the authors and editors. Nevertheless, we think reporters should put themselves in a position to ferret out this kind of background by talking to independent experts and skeptically appraising evidence. With the chelation study, experts seemed to be seeking out reporters in order to poke holes in the research and generate appropriate skepticism. There was no corresponding push with this study, and the result seems to be that some media outlets have largely accepted the results at face value.
This story did not attempt to put a price tag on meditation therapy, which, as HealthDay pointed out, can be expensive. It’s also worth noting that the mean household income of the participants was less than $18,000 annually.
The story tells us there was a “48% reduction in the overall risk of heart attack, stroke, and death from any cause among members of the meditation group compared to those from the health education group.” However, the absolute difference in outcomes between groups was not quite as striking as this figure would suggest. There were 20 events in the meditation group and 32 in the control group — data the story easily could (and should) have provided.
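The gap between relative and absolute framing can be made concrete. A minimal sketch, using the 20 vs. 32 event counts above and a hypothetical group size of 100 per arm (the story does not report denominators, and the published 48% figure comes from a time-to-event hazard model, so this crude calculation will not reproduce it exactly):

```python
# Illustration of relative vs. absolute risk reduction.
# Group sizes are assumed for illustration only (not reported in the story);
# the event counts are the 20 vs. 32 figures cited above.

meditation_events, control_events = 20, 32
n_per_group = 100  # hypothetical

risk_meditation = meditation_events / n_per_group  # 0.20
risk_control = control_events / n_per_group        # 0.32

absolute_reduction = risk_control - risk_meditation       # 0.12
relative_reduction = absolute_reduction / risk_control    # 0.375

print(f"Absolute risk reduction: {absolute_reduction:.0%}")   # 12%
print(f"Relative risk reduction: {relative_reduction:.1%}")   # 37.5%
```

The point is not the exact numbers but the framing: a headline-friendly relative figure can describe a much more modest absolute difference, which is why stories should report both.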
The story did not provide any information on cardiovascular endpoints, which were secondary outcome measures. There were 4 deaths from cardiovascular disease in the meditation group and 5 among the health education subjects.
Though it doesn’t explicitly report on harms, the story does imply that meditation “can’t hurt you,” which we’ll agree is an accurate sentiment. Any harms are likely to be indirect — i.e., choosing meditation (a possibly ineffective treatment) over something that is better proven.
However, the story whiffed on the important indirect harm of “opportunity costs.” Training and ongoing programming in TM are expensive, especially given the mean household income of the participants.
All interventions have costs. All interventions have harms. All health care news stories – in our opinion – need to report on both.
This one’s a close call, as the story does introduce some doubtful notes when it points out the small number of participants in the study and the lack of any clear explanation as to how meditation might lower heart disease risk. But we think the story’s overall evaluation is not sufficiently skeptical. The general sense is that the study researchers overcame any doubts about meditation by going to “great lengths … to make their trial scientifically rigorous.” Specifically, the story notes that the researchers adjusted for the effects of weight, smoking behavior, and diet to get a clearer picture of meditation’s benefits. And there’s the suggestion that this study, unlike previous research, allows us to “definitively credit the brain-focusing program with the better health results.”
This portrayal is not entirely accurate. As both CardioBrief and the competing HealthDay story point out, the results of this randomized study would have been stronger had they achieved statistical significance without the additional adjustments the story describes. Randomization is generally supposed to balance out the groups so that this kind of statistical fiddling isn’t necessary. And when such fiddling changes the study result from negative to positive, that casts doubt on the strength of the finding. Accordingly, the portrayal of these adjustments as “scientifically rigorous” misses the point and is somewhat misleading. These sources also point out a variety of other factors that should give us pause when evaluating the usefulness of meditation for heart disease — perspective this TIME.com coverage lacks.
The only interview source for this story was one of the study authors, Dr. Robert Schneider. Considering that Dr. Schneider works at an institution founded by the creator of transcendental meditation (Maharishi College of Perfect Health), his objectivity on this issue is certainly open to question. The story could have addressed any concerns about objectivity by balancing Dr. Schneider’s comments with those of an independent cardiology expert.
The story mentions “heart-friendly diet and exercise” and “medication” as elements of cardiovascular care. We’ll call it good enough.
Questions such as the availability of meditation trainers, insurance coverage, and whether most patients would accept and commit to meditation therapy over the long term are not addressed.
The story provides a good sense of where this study sits in the universe of meditation research.
The story didn’t rely inappropriately on this press release.