Does Your Language Fit The Evidence?


The following is a guest column by Mark Zweig and Emily DeVoto, two people who have thought a lot about how reporters cover medical research.


A health writer’s first attempt at expressing results from a new observational study read, “Frequent fish consumption was associated with a 50% reduction in the relative risk of dying from a heart attack.” Her editor’s reaction? Slash. Too wordy, too passive. The editor’s rewrite? “Women who ate fish five times a week cut their risk of dying later from a heart attack by half.” This edit seems fair enough – or is it? The change did streamline the message, but with a not-so-obvious, unintended cost to the meaning. Was the subjects’ fish consumption really responsible for their dying less frequently from heart attacks? The new wording suggests that’s the case, but the original study does not support a conclusion of cause and effect.

Epidemiologic – or observational – studies examine the association between what’s known in epidemiologic jargon as an exposure (e.g., a food, something in the environment, or a behavior) and an outcome (often a disease or death). Because of all the other exposures occurring simultaneously in the complex lives of free-living humans – exposures that can never be completely accounted for – such studies cannot provide evidence of cause and effect; they can only provide evidence of some relationship that a stronger design could explore further. In other words, observational studies cannot distinguish direction – whether exposure A influences outcome B, whether B influences A, or whether both are influenced by something else – even if the association is strong and consistent. Other designs could illuminate the causal nature and direction of the relationship, if present.
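The confounding problem can be made concrete with a toy simulation (all names and numbers here are invented for illustration, not taken from any study): a hidden “health consciousness” trait drives both fish eating and survival. Fish itself has no effect in this model, yet a strong apparent association emerges.

```python
import random

random.seed(0)

# Hypothetical toy model: a hidden "health consciousness" factor drives BOTH
# the exposure (eating fish) and the outcome (dying). Fish has NO causal
# effect here, yet an association appears in the observational comparison.
n = 100_000
exposed_outcomes, unexposed_outcomes = [], []
for _ in range(n):
    health_conscious = random.random() < 0.5                # hidden confounder
    eats_fish = random.random() < (0.7 if health_conscious else 0.2)
    # outcome depends only on the hidden trait, never on fish:
    dies = random.random() < (0.05 if health_conscious else 0.15)
    (exposed_outcomes if eats_fish else unexposed_outcomes).append(dies)

risk_exposed = sum(exposed_outcomes) / len(exposed_outcomes)
risk_unexposed = sum(unexposed_outcomes) / len(unexposed_outcomes)
relative_risk = risk_exposed / risk_unexposed
print(f"risk among fish eaters:     {risk_exposed:.3f}")
print(f"risk among non-fish-eaters: {risk_unexposed:.3f}")
print(f"relative risk:              {relative_risk:.2f}")  # well below 1, with zero causal effect
```

The fish eaters are disproportionately the health-conscious subjects, so they die less often – an association is real, but “fish cut their risk” would be false by construction.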

The only study design involving humans that does rise to the level of demonstrating cause and effect is a randomized trial. In this design, study subjects are assigned an exposure (or a control condition) at random, irrespective of any other exposures in their lives, and all such other exposures are then assumed to even out between the treated group and the control group of subjects (and this can be demonstrated). As a result, the only difference between the groups is whether they receive the exposure under study. This approach is truly experimental. Because observational studies are not randomized, they cannot control for all of the other inevitable, often unmeasurable, exposures that may actually explain results. Thus, any link between cause and effect in observational studies is speculative at best.
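Randomization’s balancing act can be sketched the same way (again, a toy model with invented numbers): a hidden trait affects the outcome, the exposure has no real effect, and coin-flip assignment spreads the hidden trait evenly across arms, so the estimated relative risk lands near 1 instead of creating spurious “protection.”

```python
import random

random.seed(1)

# Hypothetical toy model: a hidden trait affects survival, the assigned
# exposure does not. Because assignment is a coin flip, the hidden trait
# balances out between arms, and no spurious association appears.
n = 100_000
treated, control = [], []
for _ in range(n):
    health_conscious = random.random() < 0.5       # hidden trait, unknown to the analyst
    assigned_exposure = random.random() < 0.5      # randomized, ignores the hidden trait
    dies = random.random() < (0.05 if health_conscious else 0.15)  # exposure has no effect
    (treated if assigned_exposure else control).append(dies)

rr = (sum(treated) / len(treated)) / (sum(control) / len(control))
print(f"relative risk under randomization: {rr:.2f}")  # close to 1: the null effect is recovered
```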

A subtle trap in writing about health research for general audiences occurs in the transition from the cautious, nondirectional, noncausal, passive language that scientists use in reporting the results of observational studies to the active language favored in mass media. Active language is fine in general – who wants to write like a scientist? But problems arise when causal language is not justified by the study design. For example, a description of an association (e.g., associated with reduced risk) can become, via a change to the active voice (reduces risk), an unwarranted description of cause and effect. There is a world of difference between saying “A was associated with increased B” and “A increased B.” The difference may seem subtle in terms of language, but it is large in terms of meaning.

Indeed, in practice, a shift to causal language can occur at any stage – writing, editing, or headline composing – with similar effects on meaning. Without attention to the underlying design of a study, distortions of wording can creep in that lead readers to overestimate the value of the study and possibly to make life choices that the evidence does not warrant.

Part of the job of the health journalist is to look at the description of the study design – ideally from the original article – and then to choose appropriate wording to describe the results in the news piece. A problem for journalists may arise from the language that scientists and others use to describe the results of observational studies. Sometimes scientists and press-release writers themselves slide into causal language when expressing results of observational studies, so borrowing their language warrants caution.

Below are examples of findings reported in the news in which the media mistakenly used causal language to describe the results of observational studies.                                                                                                                                                      


Example 1
Study design: Prospective cohort study of dietary fat and age-related maculopathy (observational)
Researchers’ version of results: “A 40% reduction of incident early age-related maculopathy was associated with fish consumption at least once a week.”
Media version of results: “Eating fish may help preserve eyesight in older people.”
Comment: “Preserve” and “help” are both active and causal; “may help” sounds like a caveat designed to convey uncertainty, but causality is still implied.

Example 2
Study design: Prospective cohort study of the relationship between energy expenditure and mortality in older adults (observational)
Researchers’ version of results: “Energy expenditure was strongly associated with lower risk of mortality in healthy older adults. For every 287 kcal/day in free-living activity energy expenditure, there is approximately a 30% lower risk of mortality.”
Media version of results: “The authors calculated that participants who did 75 minutes a day of activities… lowered their risk of dying by 30%…”
Comment: “Lowered their risk” is causal; “strongly associated with lower risk” is not.

Example 3
Study design: Prospective cohort study of the relationship between coffee consumption and diabetes among postmenopausal women (observational)
Researchers’ version of results: “Coffee intake, especially decaffeinated, was inversely associated with risk of type 2 diabetes.”
Media version of results: “Overall, those who drank [coffee] were 22 percent less likely to have diabetes, with decaf drinkers reaping somewhat greater benefit…”
Comment: “22 percent less likely” is correct; “reaping greater benefit” is causal.

Example 4
Study design: Prospective cohort study of fish intake and coronary heart disease in women (Nurses’ Health Study; observational)
Researchers’ version of results: “Among women, higher consumption of fish… is associated with a lower risk of coronary heart disease.”
Media version of results: “Women who ate fish 5 times a week cut their risk of dying later from a heart attack by half.”
Comment: “Cut their risk of dying” is causal.

Example 5
Study design: Prospective cohort study of aspirin use and cancer incidence among U.S. men and women (observational)
Researchers’ version of results: “Long-term daily use of adult-strength aspirin may be associated with modestly reduced overall cancer incidence.”
Media version of results: “Higher aspirin dose seems to stave off some cancers… The strongest effect was for colon cancer.”
Comment: “Stave off” is causal and active; “effect” is causal. “Seems to,” used as a caveat, does not undo the implication of causality.

Example 6
Study design: Case-control study of alcohol use and risk of breast cancer (observational)
Researchers’ version of results: “Ever-use of alcohol over the past 20 years was associated with a 1.3-fold increased risk of breast cancer.”
Media version of results: “Drinking alcohol at any time in the previous 20 years increased breast cancer risk 30 percent.”
Comment: “Increased” was converted into an active, causal verb, though the researchers had used it as an adjective in a noncausal statement.

Example 7
Study design: Nested case-control study of the relationship between acid suppression and hip fractures in patients (observational)
Researchers’ version of results: “Long-term [acid suppression] therapy, particularly at high doses, is associated with an increased risk of hip fracture.”
Media version of results: “Drugs that suppress acids may make fractures more likely… Taking proton pump inhibitors for more than a year increased the likelihood of a hip fracture by 44 percent.”
Comment: “Make fractures more likely” is causal, as is “increased the likelihood”; the caveat “may” does not undo the suggestion of causality.

News writers sometimes attempt to qualify results by using such words as “seems,” “may,” or “appears.” These words are intended to convey uncertainty, which is a healthy impulse when describing imperfect studies (i.e., most of them), but they still leave the reader with the idea that, however uncertain the results, the relationship between the exposure and the outcome is one of cause and effect.

Although much of our concern is with passive verbs that reporters convert to active, or with adjectives (“lower” risk) that reporters convert to verbs (“lowered” the risk), nouns that imply causation are another frequent problem. For example, “the protective effect,” “protection,” or “the benefit” often appear in reports about observational studies.

An important part of the portrayal of the results of research in health news lies in attention to language that may in subtle ways imply cause-and-effect relationships, even where the underlying study design does not warrant such language. We urge health care journalists to be mindful of when causal language is warranted by the study design and when it is not. Health journalists’ vigilance for these subtleties will result in more accurate communication of research findings to the public.

