A tale of two observational studies – peanuts, coffee, heart health – and how the journals & some journalists handled them differently

I saw this coming as soon as I saw the BMJ news release about a study published in one of its journals, Heart. The BMJ seemed to have turned a corner recently, starting to include at least boilerplate news release language about the limitations of observational studies, but it dropped the ball on this new one.

Larry Husten of Cardiobrief beat me to the punch with his piece, “No, Drinking Coffee Won’t Save Your Life or Prevent Heart Attacks.” Excerpts:

Once again the media has swallowed the bait hook, line, and sinker. Following the publication of a new study in the journal Heart last night, hundreds of news reports have now appeared extolling the miraculous benefits of coffee. Here’s just one typical headline from the Los Angeles Times: “Another reason to drink coffee: It’s good for your heart, study says.”

But a careful look at the study and previous research on coffee makes clear that this type of reporting is completely unwarranted. As I’ve written previously, the media loves to jump on studies like this and inform the public that, say, intense running is as bad as being sedentary. These sorts of upbeat, highly positive stories making simple recommendations based on observational studies that are in no way capable of proving cause and effect are dangerous. Unfortunately, the journalists often receive support from study authors and the journal editors and PR people who encourage this gross misinterpretation and don’t take steps to refute these dangerous misconceptions.

So what went wrong? I think the blame in this case starts with the PR machinery of the journal Heart, which is part of the BMJ (British Medical Journal) group. Here is the headline and subhead (of the BMJ news release) for this paper:

Moderate coffee consumption lessens risk of clogged arteries and heart attacks

People consuming three to five cups of coffee a day have lowest risk of clogging

It seems clear that this sort of headline is an invitation to distortion and exaggeration. I would propose that when publishing epidemiology studies, journals and editors be extremely cautious and proactive in press releases. Undoubtedly the media will continue to make mistakes, but there’s no reason why the medical establishment should be enabling these mistakes.

Examples of headlines to back up Husten’s point:

Coincidentally, another big observational study was published in another journal at the same time. JAMA Internal Medicine published the paper, “Prospective Evaluation of the Association of Nut/Peanut Consumption With Total and Cause-Specific Mortality.” The journal’s news release – while it could have done a better job of emphasizing the limitations of observational studies – nonetheless never crossed a line with its language, sticking with “associated with” or “association” throughout. And the release quoted the authors, who wrote:

“We cannot, however, make etiologic inferences from these observational data.” (My note: Etiologic means assigning a cause.) 

And the journal published an accompanying editor’s note by Dr. Mitchell Katz, who wrote:

“Multiple studies have demonstrated the beneficial effects of eating nuts. Nonetheless, the editors felt it was worth publishing another such study for 2 reasons. First, this study combined 3 cohorts to produce a large and diverse sample, including a predominately low socioeconomic cohort of Americans and 2 Chinese cohorts. The authors found that higher nut intake was associated with lower mortality in all 3 cohorts. The consistency of the results between the cohorts and with prior studies that have been performed in higher-income populations increases our confidence that the beneficial effects of nuts are not due to other characteristics of nut eaters.”

I think such an editor’s note provides important perspective. And, in a few words, it captures what I often try to include in my writing when I criticize studies, news releases and news stories for failing to mention the limitations of observational studies. Namely:

  • There comes a time when observational data is so big (“large and diverse sample” in this case) and so consistent (as mentioned above) that reasonable people, with some confidence, would begin to conclude that action can be taken based on the research. After all, that’s how/why health policy action was taken on smoking years ago.

Did the editorial and the news release make a difference in news coverage? Maybe. See these examples:

  • Reuters reported: “The study didn’t randomly compare peanut eaters to people who don’t eat peanuts, which is the gold standard method for proving a benefit. An observational study like this can’t prove that eating peanuts caused people to live longer.” The story then went on to quote Katz’s editor’s note.
  • TIME reported: “But scientists warn that the study was based on observational data collected from questionnaires, rather than clinical trials, so they cannot determine whether peanuts are specifically responsible for a lower risk of death.”

As always, though, many others missed the message, with headlines such as:

News release writers, journalists, and news consumers can learn much more from our primer, “Does the Language Fit the Evidence? – Association Versus Causation.”



Comments (6)

Please note, comments are no longer published through this website. All previously made comments are still archived and available for viewing through select posts.


March 3, 2015 at 6:11 pm

Glad someone is keeping these clickbait “news” releases in check. True journalism is becoming an endangered species.

Adrian O'Dowd

March 6, 2015 at 12:11 pm

I was responsible for writing the press releases on the Heart paper on coffee consumption and the Annals of the Rheumatic Diseases paper on gout and noticed your comments on HealthNews Review.

We (The BMJ) take your comments on board, but I wanted to respond.

The BMJ has an extremely thorough process for the selection and writing of press releases, which are distributed to more than 6,000 recipients.

When writing the releases, a draft is always sent to the relevant researchers/authors of the study to check that they are completely happy with the wording of the release. This was the case with these press releases. We do not issue a press release until the study authors have given their approval.

The press releases are always accompanied by the full paper, any linked editorial or podcast, and the authors’ contact details under embargo, so journalists have time to talk through the findings with authors before publication.

It is certainly not the BMJ’s policy to sensationalise research, but we write our news releases in such a way that they are accurate and easy to digest for a wide audience.

In the case of the coffee release, your point about the headline is fair comment and I accept that it could have been misconstrued. Conveying uncertainty in a short headline is always difficult, and this example shows how it can all too easily be turned into clickbait.

However, the word “association” was used several times in the press release rather than saying there was a “definite link” and the sub-headline is accurate, as far as this study’s findings were concerned.

Indeed, the press release has a quote from the authors saying that the study “adds to a growing body of evidence suggesting that coffee consumption might be inversely associated with CVD risk” and says that further research is warranted to confirm their findings, which underlines the fact that they are not claiming a link is by any means certain.

Similarly with the gout release, I accept the headline could have been misconstrued, but it is tricky to convey uncertainty in a short headline and hopefully that becomes clearer when the journalist reads further and, hopefully, reads the actual paper itself or speaks to the authors.

I hope these comments are helpful.

    Gary Schwitzer

    March 6, 2015 at 12:27 pm


    Thanks for your note.

    If you read my blog posts, you’ll see that I’ve had a long-running back-and-forth with BMJ about its news releases about observational studies. And now you’ve been pulled into it.

    I don’t think that sending a draft news release to the relevant researchers/authors necessarily solves the problem I’m raising.

    Someone in the editorial office must take responsibility for the content of the news releases because of their known impact on the journalists who receive them, and on members of the general public who read the news stories that are impacted by the news releases.

    I would argue that relying solely on the use of the term “association” goes over the head of many journalists, and then, over the head of many news consumers.

    I don’t think it’s that tricky to convey observational data in a short headline. Just don’t use causal verbs. We offer a primer that might help. It presents various ways of accurately writing about observational research.

    All we’re looking for in the body of the news release are a few accurate words, such as: “This is an observational study so no definitive conclusions can be drawn about cause and effect.”

    Please don’t rely on journalists to get it right by reading the paper or by speaking with the researchers. That is a lofty ideal that I embrace. But you and I both know that, in the real world, it often – perhaps most of the time – doesn’t happen.

    You acknowledge that these headlines and releases “could be misconstrued.” We’ve gone beyond the hypothetical. We’ve seen over and over how they indeed are misconstrued. And they confuse and frustrate readers. We’ve pointed to examples of this in the past.

    I’d be happy to continue the conversation. I’d rather do that, and see improvement – as I thought had happened late last year after exchanges I had with Trish Groves and Emma Dickinson – than to continue to criticize a practice that is so easily fixable.

    I’m pleased that you entered a comment here so that we can bring this dialogue online for all to see, learn from, and comment on.

    Thanks again for taking the time to write.


    Gary Schwitzer
    Publisher, HealthNewsReview.org and the Health News Watchdog blog
    Adjunct Associate Professor, University of Minnesota School of Public Health
    Director, Center for Media Communication and Health, UMN SPH


March 9, 2015 at 11:06 am

Goodness gracious… “I accept the headline could have been misconstrued” and “your point about the headline is fair comment and I accept that it could have been misconstrued” – the BMJ cut-and-paste response blather. How about this one: “fire trucks are associated with causing car accidents” – every time I happen upon a car accident there is always a fire truck (or two) present. Observational “research” should never be part of my morning radio news or my evening TV news – but the BMJ needs to market the BMJ.

Daniel Pendick

March 9, 2015 at 12:59 pm

I feel that many of us in the health reporting racket are still clinging to the idea that it’s excusable to use causal language in headlines because the rules are somehow different in headlines. The idea seems to be that you can cheat a little in the headline to communicate the important implication of the study and grab people’s attention, even if the study itself does not definitively prove causation. But I’ve felt more and more uncomfortable with it–for the reasons Gary identifies. It can help to put the entire press response on a misleading trajectory.

So how about using the “linked to” approach? It seems like a pretty good approximation of “associated with,” and feels less jargony. Resulting headline: “Peanuts linked to same heart, longevity benefits as more pricey nuts.” Does it work for a mass audience? Obviously it has a weaker impact than a “strong” causative verb, but it does have the advantage of being accurate.