The following is a guest column by Mark Zweig and Emily DeVoto, two people who have thought a lot about how reporters cover medical research.
A health writer’s first attempt at expressing results from a new observational study read, “Frequent fish consumption was associated with a 50% reduction in the relative risk of dying from a heart attack.” Her editor’s reaction? Slash. Too wordy, too passive. The editor’s rewrite? “Women who ate fish five times a week cut their risk of dying later from a heart attack by half.” This edit seems fair enough – or is it? The change did streamline the message, but with a not-so-obvious, unintended cost to the meaning. Was the subjects’ fish consumption really responsible for their dying less frequently from heart attacks? The new wording suggests that’s the case, but the original study does not support a conclusion of cause and effect.
Epidemiologic – or observational – studies examine the association between what’s known in epidemiologic jargon as an exposure (e.g., a food, something in the environment, or a behavior) and an outcome (often a disease or death). Because of all the other exposures occurring simultaneously in the complex lives of free-living humans – exposures that can never be completely accounted for – such studies cannot provide evidence of cause and effect; they can only provide evidence of some relationship that a stronger design could explore further. In other words, observational studies cannot distinguish direction – whether exposure A influences outcome B, or B influences A, or both are influenced by something else – however strong and consistent the association may be. Other designs could illuminate the causal nature and direction of the relationship, if present.
The only study design involving humans that does rise to the level of demonstrating cause and effect is a randomized trial. In this design, study subjects are assigned an exposure (or a control condition) at random, irrespective of any other exposures in their lives, and all such other exposures are then assumed to even out between the treated group and the control group of subjects (and this can be demonstrated). As a result, the only difference between the groups is whether they receive the exposure under study. This approach is truly experimental. Because observational studies are not randomized, they cannot control for all of the other inevitable, often unmeasurable, exposures that may actually explain results. Thus, any link between cause and effect in observational studies is speculative at best.
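To make the confounding problem concrete, consider a toy simulation (a hypothetical sketch of our own, with made-up numbers, not drawn from any of the studies discussed below). In it, eating fish does nothing to heart-attack risk, yet fish eaters still die less often in the observational comparison, simply because a hidden lifestyle factor makes people both more likely to eat fish and less likely to die; once the exposure is assigned at random, the apparent “protection” vanishes.

```python
# A minimal, hypothetical sketch (our illustration only; the variable names and
# probabilities below are made up and do not come from any study cited here).
# Fish eating has NO causal effect on heart-attack deaths in this simulation,
# yet the observational comparison still shows fish eaters dying less often,
# because a hidden "health-conscious lifestyle" factor drives both behaviors.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder: roughly half the population leads a health-conscious life.
healthy_lifestyle = rng.random(n) < 0.5

# Observational world: fish eating is NOT assigned at random; health-conscious
# people are more likely to eat fish (70% vs. 30%).
eats_fish_obs = rng.random(n) < np.where(healthy_lifestyle, 0.7, 0.3)

# Outcome: risk of dying from a heart attack depends only on lifestyle
# (5% vs. 10%), never on fish consumption.
def simulate_deaths(lifestyle):
    return rng.random(n) < np.where(lifestyle, 0.05, 0.10)

def relative_risk(exposed, died):
    # Death rate among the exposed divided by death rate among the unexposed.
    return died[exposed].mean() / died[~exposed].mean()

died_obs = simulate_deaths(healthy_lifestyle)
print("Observational relative risk:", round(relative_risk(eats_fish_obs, died_obs), 2))

# Randomized world: fish eating is assigned by coin flip, so lifestyle evens
# out between the groups and the spurious "protection" disappears.
eats_fish_rct = rng.random(n) < 0.5
died_rct = simulate_deaths(healthy_lifestyle)
print("Randomized relative risk:   ", round(relative_risk(eats_fish_rct, died_rct), 2))
```

With these assumed numbers, the observational comparison yields a relative risk of roughly 0.76 – a seemingly reassuring “24% lower risk” – despite zero causal effect, while the randomized comparison yields a relative risk of about 1.0.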
A subtle trap in writing about health research for general audiences occurs in the transition from the cautious, nondirectional, noncausal, passive language that scientists use in reporting the results of observational studies to the active language favored in mass media. Active language is fine in general – who wants to write like a scientist? But problems arise when the use of causal language is not justified by the study design. For example, a description of an association (e.g., “associated with reduced risk”) can become, via a change to the active voice (“reduces risk”), an unwarranted description of cause and effect. There is a world of difference in meaning between saying “A was associated with increased B” and “A increased B.” The difference may seem subtle in terms of language, but it is large in terms of meaning.
Indeed, in practice, a shift to causal language can occur at any stage – writing, editing, or headline composing – with similar effects on meaning. Without attention to the underlying design of a study, distortions of wording can creep in that could lead readers to overestimate the value of the study and possibly make life choices that the evidence does not warrant.
Part of the job of the health journalist is to look at the description of the study design – ideally from the original article – and then to choose appropriate wording to describe the results in the news piece. A problem for journalists may arise in the language that scientists and others use to describe the results of observational studies. Sometimes scientists and press-release writers themselves slide into causal language when expressing results of observational studies, so borrowing their language warrants caution.
Below are examples of findings reported in the news in which the media mistakenly used causal language to describe the results of observational studies.
| Study design | Researchers’ version of results | Media version of results | Problem |
| --- | --- | --- | --- |
| Prospective cohort study of dietary fat and age-related maculopathy (observational) | A 40% reduction of incident early age-related maculopathy was associated with fish consumption at least once a week. | Eating fish may help preserve eyesight in older people. | “Preserve” and “help” are both active and causal; “may help” sounds like a caveat designed to convey uncertainty, but causality is still implied. |
| Prospective cohort study of the relationship between energy expenditure and mortality in older adults (observational) | Energy expenditure was strongly associated with lower risk of mortality in healthy older adults. | The authors calculated that participants who did 75 minutes a day of activities… lowered their risk of dying by 30%… | “Lowered their risk” is causal; “strongly associated with lower risk” is not. |
| Prospective cohort study of the relationship between coffee consumption and diabetes among postmenopausal women (observational) | Coffee intake, especially decaffeinated, was inversely associated with risk of type 2 diabetes. | Overall, those who drank [coffee] were 22 percent less likely to have diabetes, with decaf drinkers reaping somewhat greater benefit… | “22 percent less likely” is correct; “reaping greater benefit” is causal. |
| Prospective cohort study of fish intake and coronary heart disease in women (Nurses’ Health Study; observational) | Among women, higher consumption of fish… is associated with a lower risk of coronary heart disease. | Women who ate fish 5 times a week cut their risk of dying later from a heart attack by half. | “Cut their risk of dying” is causal. |
| Prospective cohort study of aspirin use and cancer incidence among U.S. men and women (observational) | Long-term daily use of adult-strength aspirin may be associated with modestly reduced overall cancer incidence. | Higher aspirin dose seems to stave off some cancers… The strongest effect was for colon cancer. | “Stave off” is causal and active; “effect” is causal. “Seems to,” used as a caveat, does not undo the implication of causality. |
| Case-control study of alcohol use and risk of breast cancer (observational) | Ever-use of alcohol over the past 20 years was associated with a 1.3-fold increased risk of breast cancer. | …drinking alcohol at any time in the previous 20 years increased breast cancer risk 30 percent. | “Increased” was converted into an active, causal verb, though the researchers had used it as an adjective in a noncausal statement. |
| Nested case-control study of the relationship between acid suppression and hip fractures in patients (observational) | Long-term [acid suppression] therapy, particularly at high doses, is associated with an increased risk of hip fracture. | Drugs that suppress acids may make fractures more likely… Taking proton pump inhibitors for more than a year increased the likelihood of a hip fracture by 44 percent. | “Make fractures more likely” is causal, as is “increased the likelihood”; the caveat “may” does not undo the suggestion of causality. |
News writers sometimes attempt to qualify results by using such words as “seems,” “may,” or “appears.” These words are intended to convey uncertainty, which is a healthy impulse when describing imperfect studies (i.e., most of them), but they still leave the reader with the idea that, however uncertain the results, the relationship between the exposure and the outcome is one of cause and effect.
Although much of our concern is with passive verbs that reporters convert to active, or with adjectives (“lower” risk) that reporters convert to verbs (“lowered” the risk), nouns that imply causation are another frequent problem. For example, “the protective effect,” “protection,” or “the benefit” often appear in reports about observational studies.
Portraying the results of research accurately in health news requires attention to language that may in subtle ways imply cause-and-effect relationships, even where the underlying study design does not warrant such language. We urge health care journalists to be mindful of when causal language is warranted by the study design and when it is not. Health journalists’ vigilance for these subtleties will result in more accurate communication of research findings to the public.