Journalists must learn: association doesn't equal causation

Let’s revisit this oft-violated tenet of scientific communication.

Just because an association has been established doesn’t mean a causal link has been established.
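To make the distinction concrete, here is a minimal simulation — with made-up numbers that have nothing to do with the JNCI study — in which a hidden confounder raises both the chance of the “exposure” and the chance of the “outcome.” The exposure has no causal effect at all, yet the data still show an elevated relative risk.

```python
import random

# Purely illustrative: a hidden confounder drives both "exposure" and
# "outcome", so the two end up associated even though the exposure has
# no causal effect on the outcome at all.
random.seed(0)

n = 100_000
exposed_cases = exposed_total = unexposed_cases = unexposed_total = 0

for _ in range(n):
    confounder = random.random() < 0.5           # some unmeasured lifestyle factor
    p_exposure = 0.7 if confounder else 0.3      # confounder makes exposure more likely
    p_outcome = 0.20 if confounder else 0.05     # confounder also raises outcome risk
    exposed = random.random() < p_exposure
    outcome = random.random() < p_outcome        # note: 'exposed' is never used here

    if exposed:
        exposed_total += 1
        exposed_cases += outcome
    else:
        unexposed_total += 1
        unexposed_cases += outcome

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
print(f"Risk among exposed:   {risk_exposed:.3f}")
print(f"Risk among unexposed: {risk_unexposed:.3f}")
print(f"Relative risk:        {risk_exposed / risk_unexposed:.2f}")  # well above 1, with zero causal effect
```

Observed counts alone can’t distinguish that scenario from a genuinely causal one — which is exactly why study design matters so much.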

The latest botching of this message came two weeks ago, when the Journal of the National Cancer Institute published a study of women, how much alcohol they drank, and how often they got cancer.

The study’s lead author told The Guardian, “Given that this is the largest study in the world to look at this, it’s clear that even at low levels of alcohol consumption, there does seem to be a very significant increase in cancer risk, and most women are probably not aware of that.”

Several friends wrote to me about news coverage of this study.

It was page-one news in the Washington Post, for example, where the headline incorrectly stated: “A Drink A Day Raises Women’s Risk of Cancer.”

One Post reader wrote the following to me:

It is a good example of what confuses the public. The Post article overflows with causal language, using phrases such as “increases the risk,” and “may cut the risk” multiple times. The writing is naïve and I am doubtful that he understands the hierarchy of evidence, or the difference between observational and experimental. Unfortunately, the investigators are complicit here, as well. The reporter quotes the investigator saying “increasing your risk,” which is consistent with her language, “Low to moderate alcohol consumption in women increases the risk of certain cancers…” in the abstract of the paper itself.

(The journal article) uses associative language quite a bit as well and I’ll bet the authors understand the difference. Why they slip into causal language, I don’t know. I guess it sounds better and varies the syntax. Or it sells better. I have observed a lot of this in the scientific literature—in JAMA, the NEJM, JNCI and others. This is frustrating because it leads the journalists down that path, wittingly or not. Makes it harder to educate the journalist, if you are having to instruct them that the investigators are sometimes wrong and may mislead them.

Another friend sent me a link to an online article by Patrick Basham and John Luik, “Women, keep drinking: Why was a flimsy study apparently showing a link between booze and breast cancer so uncritically accepted?” Excerpts:

Allen (the lead author) came across with even scarier news for Americans, telling the Washington Post that the ‘take-home message’ was this: ‘If you are regularly drinking even one drink per day, that’s increasing your risk for cancer [since] there doesn’t seem to be a threshold at which alcohol consumption is safe.’

One can’t help but wonder just what Allen herself has been drinking… After all, her public pronouncements, her recommendations to government, and the reports about her study in the media are certainly not supported by her results.

First, Allen’s study is an observational one, based on data from the UK’s Million Women Study, which is a study about the association between Hormone Replacement Therapy and cancer and heart disease. Allen’s study comes from self-reports about the drinking habits of women in that study.

This means that the study, as an observational study – the weakest kind of epidemiological endeavour and certainly nothing close to the gold standard of a randomised controlled trial – is inherently unable to draw any causal conclusions about a link between drinking and cancer.

Second, the study fails to meet even the most basic requirement of science – that is, being able to validate its measurements – since it is entirely based on the women’s self-reports of their recollection of their drinking. None of these reports was checked and the authors can make no claim about how reliable they are. No one knows how much or how little these women really drank since no one bothered to measure it.

This makes any conclusions based on such ‘evidence’ just a tad dicey. At its foundation, therefore, the study can’t warrant that any of its data about the key fact – the drinking habits of its subjects – is accurate.

Basham and Luik went on to point out that “teetotallers had a higher population incidence of cancer than those consuming up to 14 drinks a week!” And that “of the cancer-drinking correlations examined, virtually none was statistically significant.”
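To see what that significance point means in practice, here is a short sketch — using made-up counts, not anything from the paper — of the standard check: compute the relative risk and its 95% confidence interval, and if the interval includes 1.0, chance alone could account for the difference.

```python
import math

# Purely hypothetical counts (not from the study), chosen only to show
# how a relative risk and its 95% confidence interval are checked.
cases_drinkers, total_drinkers = 280, 10_000
cases_abstainers, total_abstainers = 250, 10_000

risk_drinkers = cases_drinkers / total_drinkers
risk_abstainers = cases_abstainers / total_abstainers
rr = risk_drinkers / risk_abstainers

# Standard error of log(RR) for a 2x2 table, then a 95% confidence interval.
se = math.sqrt(
    1 / cases_drinkers - 1 / total_drinkers
    + 1 / cases_abstainers - 1 / total_abstainers
)
low = math.exp(math.log(rr) - 1.96 * se)
high = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
print("Not statistically significant" if low <= 1.0 <= high else "Statistically significant")
```

With these illustrative numbers the interval straddles 1.0, so the apparent excess risk could be nothing more than chance — the kind of detail that should temper both headlines and press quotes.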

Their conclusion:

What is the real take-home message of this study? Perhaps it should be to avoid drinking policy advice produced by Oxford epidemiologists.
