Michael Joyce produces multimedia at HealthNewsReview.org and tweets as @mlmjoyce
Within the past couple of months the BMJ published two separate observational studies looking at how two very different lifestyle factors might impact memory and dementia.
Both studies draw from the same group of research subjects: the Whitehall II cohort, which has followed roughly 10,000 British civil servants for the past 30 years or so. Here's what the studies investigated:
This captured our attention for two reasons.
First, the alcohol study – which found that moderate alcohol consumption was linked to some tissue wasting in the hippocampus, one of the brain's key memory centers – was widely covered by the media. But the exercise study – which found no association between physical activity level and cognitive decline – received no mainstream coverage we could find.
Could it be that the alcohol study – with its “positive finding” (that some people might erroneously equate with showing cause and effect) – is somehow more attractive, dramatic, or reportable than the physical activity study, with its “negative finding” (that some people might erroneously equate with “finding nothing”)?
In other words, is it possible that a headline which suggests moderate drinking is harmful to the brain is more click-worthy than one which can “only” claim physical activity level has no effect on memory? There were dozens of headlines featuring the alcohol study that certainly reinforce that notion. Here’s a sampling:
This is as good a point as any to raise the significant limitations of both studies. Because both are observational, neither can establish cause and effect. Likewise, both rely on self-reported drinking and exercise patterns, which are notoriously unreliable.
Furthermore, the Whitehall II cohort isn't exactly representative of the public at large because it skews toward well-educated, middle-class white men.
What I haven’t told you yet — and it could certainly affect the disproportionate media coverage mentioned above — is that the BMJ issued a news release for the alcohol study but not the activity study.
Why is that?
If it’s for the same reasons noted above (i.e. that “positive” results are more attractive than “negative”) then it raises compelling questions regarding how media coverage — and therefore public opinion, conventional wisdom, and even public policy — can be dictated, to some extent, right from story inception.
Not just which studies get published, but which studies get pushed out for general public consumption.
Dr. José Merino is the US research editor for The BMJ. In my email exchanges with him he quoted an email from his PR manager who wrote that the journal sends out news releases based on the following:
“Press releases are designed to generate news, and content is selected on the basis of its news potential. We make decisions based on what we think will be of interest to journalists – who will hopefully want to cover the story for print, broadcast, or online.”
I asked Merino for his take on why some studies generate news releases, others don’t, and how this impacts media coverage.
“Perhaps [in this case] it’s related to the topic of the paper. Is alcohol more interesting than exercise? Studies more likely to be picked up by the media are those that find an association between exposure and outcome (‘positive’ studies), as well as those that have a press release. These are probably linked because it is possible that journals may be more likely to issue a press release when a study finds an association.
In this case it’s hard to know whether a press release would have meant that the exercise paper would have been picked up by more outlets, but we expect it would have. This may represent another aspect of positive publication bias: we publish a ‘negative’ study but don’t press release it, and even if we had done so, it’s possible that the press would not have picked it up, or would have picked it up less than the ‘positive’ one.”
Merino acknowledges the possibility that both publishers and readers may have an unconscious — and even conscious — bias against “negative” studies, and thinks it would make for a compelling study.
I’m certainly not accusing the BMJ of intentionally choosing to highlight just positive findings. Nor am I saying the news coverage was universally superficial.
What I am pointing out is a flow of information from scientific studies to public consumption that can be modified or disrupted at several key points along the way – from the medical journals that publish studies to the headlines most of us read.
And I think it should give us all pause to realize that the person who controls the faucet is often a PR manager – not a scientist or health care professional, but someone motivated primarily by the "news potential" of the study, and not necessarily the overall health of the public.
And I think it is imperative to bear in mind that these editorial choices ultimately do influence the health care choices made by real people.