VIDEO: Does diet soda = dementia? A second look at the study uncovers key statistical shortcomings

Michael Joyce produces multimedia at HealthNewsReview.org and tweets as @mlmjoyce

Just over a week ago I wrote about how reporters covering a recent observational study — which reported that drinking at least one artificially sweetened beverage daily was associated with an increased risk of developing stroke or dementia — often neglected three key points.

Those points were:

  • relative risk was reported much more often than absolute risk (a worked example follows this list)
  • the fact that association does not equal causality was not always mentioned
  • key limitations of the study were rarely included.
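
To see why that first point matters, here is a minimal sketch in Python. The numbers below are made up for illustration; they are not taken from the study:

```python
# Hypothetical illustration of relative vs. absolute risk.
# These event rates are invented for the example; they are
# not the rates reported in the diet soda study.

baseline_risk = 0.01   # 1% risk of the outcome in non-drinkers
exposed_risk = 0.03    # 3% risk in daily diet-soda drinkers

relative_risk = exposed_risk / baseline_risk        # 3.0 -> "risk tripled"
absolute_increase = exposed_risk - baseline_risk    # 0.02 -> 2 percentage points

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Absolute risk increase: {absolute_increase:.1%}")
# A headline saying "risk tripled" and one saying "risk rose by
# 2 percentage points" describe exactly the same hypothetical data;
# only the first sounds alarming.
```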

But there was one key limitation I did not include that one of our frequent contributors — Dr. Vinay Prasad — jumped on right away. He said that given the huge number of variables in the study (over 90), the authors should have performed what’s known as a “multiple comparisons” adjustment. Without such an adjustment, it’s likely that some results would turn up positive simply due to the play of chance.
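
To make the multiple comparisons problem concrete, here is a minimal simulation in Python. The setup is a deliberately simplified stand-in (two-group t-tests on made-up data, not the survival models the study actually used), but it shows how many of 90 truly null comparisons will look "significant" at p < 0.05 by chance alone, and how a Bonferroni adjustment tightens the threshold:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_tests = 90        # roughly the number of variables in the study
alpha = 0.05
n_per_group = 200

# Simulate 90 comparisons in which the null hypothesis is TRUE for
# every single one: both groups are drawn from the same distribution,
# so any "significant" result is purely the play of chance.
p_values = np.array([
    stats.ttest_ind(rng.normal(size=n_per_group),
                    rng.normal(size=n_per_group)).pvalue
    for _ in range(n_tests)
])

print("Expected false positives at alpha=0.05:", n_tests * alpha)  # 4.5
print("Observed 'significant' results:", (p_values < alpha).sum())

# A Bonferroni adjustment divides alpha by the number of tests:
# 0.05 / 90 is about 0.00056, which is roughly where a threshold
# like the p < 0.0005 cited in the comments below comes from.
bonferroni_alpha = alpha / n_tests
print("Bonferroni threshold:", bonferroni_alpha)
print("Results surviving Bonferroni:", (p_values < bonferroni_alpha).sum())
```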

This limitation was mentioned by the authors at the very end of the paper:

“… we did not adjust for multiple comparisons meaning that some findings may be attributable to chance”

But it didn’t find its way into any of the news coverage about the study that I saw. 

This is significant because — as you can imagine — any study suggesting an association between drinking diet soda and the risk of dementia and stroke is going to get massive coverage worldwide.

Which this study did.

Maybe that’s why, when Dr. Prasad tweeted his indignation, many in the science community took note and retweeted what they saw as an important drawback of the study.

Perhaps you are like me in that your statistics muscle doesn’t get flexed much. I needed help to better understand what got Dr. Prasad (and his Twitter followers) all worked up, so I turned to one of our regular contributors, Dr. Susan Wei, a mathematician and statistician at the University of Minnesota School of Public Health.

Most of us don’t have a natural aptitude for statistics. And I’m not saying we need to run out and take a crash course in it just so we can scrutinize the news.

But here’s something I learned from covering this study. As always, look closely at the limitations of the study. Most authors include them somewhere. And if there’s a single limitation that is unclear, find a source who can translate it into terms you and your readers can understand.

In this case social media proved invaluable. Although I wouldn’t consider Twitter a vetted source, it can certainly direct you to some knowledgeable experts.

Comments (4)


Paul Alper

May 3, 2017 at 5:00 pm

Once again, Michael Joyce has written a terrific article. However, his comment,
“Most of us don’t have a natural aptitude for statistics. And I’m not saying we need to run out and take a crash course in it just so we can scrutinize the news”
is disheartening to this former statistics teacher. As long as society keeps relegating the subject of statistics to the dull and inscrutable aspects of existence, we will remain prone to misunderstanding quantitative results. Unlike, for example, the difference between relative risk and absolute risk, the dilemma of multiple comparisons is always discussed in an elementary statistics course, even if how best to handle it is complicated.

Andrew DePristo

May 5, 2017 at 7:27 am

I like the simple statistics-based argument by Dr. Prasad, but such an argument also makes it appear that the only problem with the study is the statistical treatment. In my original criticism, I pointed out that a major confounding variable, blood pressure, could not be treated properly because of its high hazard ratio, 5 to >30, depending upon exactly how high the BP is (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3838588/). The correction for this would have to be done for each individual as a function of time, since BP varies over the duration of the study. I don’t believe that anyone knows how to do such a correction. As such, even if the findings reached statistical significance according to Dr. Prasad’s p<0.0005 value, the correction methods would still be flawed and the evidence incorrect. It is the science that is flawed, not just the statistical analysis!

Nina Teicholz

May 8, 2017 at 5:38 am

Excellent job! I would love to see this kind of analysis for the many epidemiological studies that are published. Keep it up!

Doug Mathias

May 8, 2017 at 9:55 am

I agree with Paul Alper’s comment that statistics, studied formally (and probably on a “project” basis rather than a “hard theory” basis, in my opinion), should be part of every adult’s ongoing educational curriculum.

I am a bit confused as to why elevated BP should be treated entirely as a confounding variable, however. This is not my field, but effective research design must be the key to exposing causality here, and I suspect it is less “the science” than the design. Can anyone enlighten me on this?