Health News Review

Canadian physician-blogger Yoni Freedhoff writes, “What Reading That White Rice and Diabetes Study Actually Told Me.”

He analyzes the study’s methodological issues, questions the BMJ’s decision to publish it, then adds:

But that’s not the truly shocking part. This is. The BMJ published an accompanying editorial that rightly called the paper out on its methodological and statistical inadequacies and in conclusion stated,

Although the findings of the current study are interesting they have few immediate implications for doctors, patients, or public health services and cannot support large scale action. Further research is needed to develop and substantiate the research hypothesis.

Yet what’s the title and first line of the very same BMJ’s press release regarding this statistically and methodologically weak paper?

White Rice Increases Risk of Type 2 Diabetes

The risk of type 2 diabetes is significantly increased if white rice is eaten regularly, claims a study published today on”

The next 6 paragraphs of the press release continue in that same conclusive, important-sounding vein, only to end with this last line:

In an accompanying editorial, Dr Bruce Neal from the University of Sydney suggests that more, bigger studies are needed to substantiate the research hypothesis that white rice increases the chances of getting type 2 diabetes.

Guess which 7 of 8 press release paragraphs the media paid attention to?

Here is a sampling of actual media headlines that resulted from the BMJ’s misleading press release:

“Eating White Rice Daily Up Diabetes Risk” – CBS News

“White Rice ‘Could Cause Diabetes’” – The Independent

“Think Twice About Rice? New Study’s Advice” – ABC News

“White Rice Raises T2 Diabetes Risk, Claim Academics” – The Daily Telegraph

“White Rice Linked to Diabetes Risk” – WebMD

“White Rice May Increase Your Risk of Diabetes” – MSNBC

And of course the Twitterverse went crazy too: rice-bashing tweets abounded, including from some rather influential tweeps – Dr. Sanjiv Chopra, Dean of Continuing Education at Harvard Medical School; the Harvard School of Public Health itself; journalist Greta Van Susteren; the American Society of Nephrology; the Drudge Report; and many, many more.

Now I’m not suggesting white rice is a wise food to consume. On the contrary, I generally recommend people try to minimize its consumption. But to be very clear, this study does not in any way, shape, or form have the strength to support any conclusions whatsoever about the specific impact white rice has on the risk of developing type 2 diabetes.

And what the hell is up with the BMJ? Why publish a paper so weak that you feel the need to co-publish an editorial questioning its design and conclusions, and then simultaneously put out a press release that glosses over those weaknesses and misinforms the media about the paper’s conclusions?

The Behind the Headlines site in the UK published its usual thoughtful analysis of the study, “Rice ‘diabetes risk’ overstated.” Excerpt:

Although the review has found an association, it cannot prove that white rice itself directly causes type 2 diabetes, as there are many other factors that could affect the risk of developing the condition (such as physical activity, alcohol and obesity).
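That confounding caveat can be made concrete with a toy simulation. All numbers below are hypothetical, chosen purely for illustration – none come from the BMJ paper. A single unmeasured factor that drives both rice consumption and diabetes risk produces a crude association even though, in the simulation, rice has no causal effect at all:

```python
import random

random.seed(42)

# Toy model with made-up numbers: a single confounder (say, a broader
# dietary pattern) raises BOTH the chance of eating white rice and the
# chance of developing diabetes. White rice itself has no causal effect here.
N = 200_000
# counts[c] = [n_rice, diabetics_rice, n_no_rice, diabetics_no_rice]
counts = {0: [0, 0, 0, 0], 1: [0, 0, 0, 0]}

for _ in range(N):
    c = 1 if random.random() < 0.5 else 0            # confounder level
    rice = random.random() < (0.7 if c else 0.2)     # confounder -> more rice
    diab = random.random() < (0.10 if c else 0.04)   # confounder -> more diabetes
    row = counts[c]
    if rice:
        row[0] += 1
        row[1] += diab
    else:
        row[2] += 1
        row[3] += diab

# Crude (unadjusted) relative risk, pooling both confounder levels
rice_n = sum(r[0] for r in counts.values())
rice_d = sum(r[1] for r in counts.values())
none_n = sum(r[2] for r in counts.values())
none_d = sum(r[3] for r in counts.values())
crude_rr = (rice_d / rice_n) / (none_d / none_n)

# Relative risk within each confounder level (confounder held fixed)
stratum_rr = [(r[1] / r[0]) / (r[3] / r[2]) for r in counts.values()]

print(f"crude relative risk:  {crude_rr:.2f}")                     # well above 1
print(f"within-stratum risks: {[round(rr, 2) for rr in stratum_rr]}")  # both near 1
```

Adjusting for measured confounders removes this kind of distortion only for the factors that were actually measured – which is exactly why cautious analysts stop short of causal language about the unmeasured ones.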

Kevin Lomangino, one of our story reviewers, reminds us of another recent example of problems with a BMJ news release on an observational study – that time regarding chocolate.



shaun nerbas posted on March 26, 2012 at 1:00 pm

Keep asking these questions about the amount and quality of the evidence behind some of these wild health headlines. This sensationalized health reporting sounds like the same kind of quality stories reported about teen idols and the Academy Awards: it’s speculative, it gets your attention, but it is often misleading and possibly quite wrong.
Chocolate, rice, aspirin, and coffee sales are affected by this stuff. It leaves a lot of desperate people open to financial exploitation.

Trish Groves posted on March 27, 2012 at 9:40 am

I’d just like to add a bit of context to this post – which seems to me at least as sensationalist as the story it’s criticising.

It’s odd how commentators are often so negative about well conducted, cautiously interpreted observational studies. There still seems to be a prevailing view that if it ain’t a randomised controlled trial (RCT) it ain’t science – which is nonsense unless you’re assessing the effects of an intervention/diagnostic test/screening programme.

The 2011 Levels of Evidence table produced by the Oxford Centre for Evidence Based Medicine states clearly that for prognosis studies (including those looking at risk), the highest level of evidence is a systematic review of cohort studies, followed by individual cohort studies.

Nobody’s going to do an RCT to see whether eating white rice gives you diabetes, not least because it’s the wrong study design: a trial would have to be enormous and very long and the participants would, as always in trials, not be representative or reflect what happens in real life.

The BMJ study (whose full text is openly available to all) adjusted as much as it could for confounding factors, and the paper included a cautious discussion of possible residual confounding.

So far its 9 Rapid Responses (eletters – again, all openly accessible) have been supportive of the paper, but we also welcome, of course, responses that critique the study.

There were two other criticisms:
1. That the BMJ’s press release was wrong. While it wasn’t actually wrong – the study does show that eating white rice is associated with a higher risk of diabetes – it’s fair to say that it was probably too brief, because it left the paper’s authors and the editorialist to discuss the caveats about the study. Every piece of science comes with caveats, and at the BMJ we work hard to make sure they’re clearly discussed in our articles.

2. That the accompanying editorial in the BMJ demolished the paper. The editorial, which the BMJ commissioned – as always – with an open brief to set the paper in context, did indeed criticise the paper and highlight the caveats that the authors had already discussed in their paper. What’s wrong with that? Would it have been better for the BMJ to say “actually, we won’t publish your editorial because you don’t fully support the paper’s findings”?

Dr Trish Groves, deputy editor BMJ
(Competing interest: I head up the BMJ team of research editors, manage the overall peer review process, and chair the weekly committee that decides which papers to publish)

    Gary Schwitzer posted on March 27, 2012 at 10:20 am


    Thanks for your note.

    Our site/project focuses on the improvement of health care journalism. In so doing, we often reflect on the potential impact of news releases. That was the particular angle in Yoni Freedhoff’s post that caught my eye.

    Let me be clear about my own position: I am not “negative about well conducted, cautiously interpreted observational studies.”

    I don’t have “a prevailing view that if it ain’t a randomised controlled trial (RCT) it ain’t science.”

    I do believe, though, that it is misleading for a news release to use phrases such as “increases risk” in describing an observational study – without including any discussion of association versus causation. We go to great lengths – every day – to try to help journalists improve their work. On this issue, we have posted a primer, “Does The Language Fit the Evidence? – Association Versus Causation.”

    We wish more medical journals would join us more often in the direct attempt to help improve health care journalism and the flow of information to news consumers and health care consumers.

    To try to help journalists use accurate language is not the same as being “negative about well conducted, cautiously interpreted observational studies.”

      Benjamin Cairns posted on March 27, 2012 at 12:17 pm


      Your project is an outstanding effort to address poor reporting of health news, and I follow it because it has as many lessons for scientists as it does for journalists. Typically, the advice given in the primer on reporting observational studies is excellent.

      At face value, however, the primer does elevate RCTs to a level of authority that they rarely reach in practice. I accept that this is not your personal view, but it is the impression given by that article.

      All types of trials have their strengths and weaknesses, and all are intended to provide statistical evidence relating to a hypothesis. Positive results from a randomised controlled trial, if it is large, well-designed and well-run, are much more reliable, because uncertainties in results can be attributed to chance rather than bias. But in practice it is often an open question whether the intervention is “the only difference” between the treatment and control groups in an RCT.

      These details can no more be ignored by health writers than can the “nuances” of observational studies. Even in the best-case scenario, chance and serious related issues like publication bias cannot be entirely dismissed when reporting RCT results. I get the impression from the article that RCTs are almost above the maxim that correlation does not imply causation. Of course, they are not. We may decide that, with sufficient data from good RCTs, the observed association is very unlikely to be due to chance or bias, but this is not (to my mind) the same as “demonstrating cause and effect”.

      Trish has already described situations where well-run observational studies provide the highest standard of evidence. It is therefore incorrect to state that observational studies can only provide evidence that “a stronger design could explore further.” The description of the possibility of residual confounding as “inevitable” in observational studies is strictly accurate, but hardly precise; the magnitude of potential bias due to confounding varies greatly in observational studies (as it may in less-than-perfect RCTs). Not every interpretation of an observational study is “speculative at best”.

      The primer seems to give RCTs a free ride; how often can a health writer be truly confident that this exception is justified?

      (My conflict of interest is that I work on large prospective studies, mostly on topics that cannot easily be investigated using RCTs.)

      Gary Schwitzer posted on March 27, 2012 at 2:10 pm


      Thanks for your kind words and for your constructive comments, with which I agree.

      The very first “tip for understanding studies” that we offer in our online toolkit is that interested parties get a copy of our guide, “Covering Medical Research: A Guide for Reporting on Studies.”

      In it, we include this advice from Jack Fowler, PhD, who wrote the book, Survey Research Methods, and who urges journalists not to make sweeping conclusions about the value of a study just because of its place on a pyramid or hierarchy of evidence. He emphasized:

      a) Internal validity. Are the conclusions about the effects (or lack thereof) caused by some intervention (drug, treatment, test, operation) accurate? The most crucial issue with respect to these conclusions is usually the comparison group. Is there one? In the ideal, the comparison group is just like the treatment group except that it did not receive the treatment. The great strength of randomized trials is that, when carried out properly, they create two or more groups that are essentially the same. However, case control studies and cohort studies also try to create comparison groups, and a critical question about their quality is how well they succeeded in finding a group that was plausibly similar and, if there were limitations to the comparability, is it reasonable that statistical adjustments could be made to compensate for the differences. Finally, one needs information that the intervention was delivered as expected (the people took the pills, those assigned to surgery or acupuncture actually showed up and got the treatment they were assigned to get – and it was delivered as scripted).

      b) External validity. To what extent can we generalize the observed results to other populations and situations? This is obviously the big question about animal studies. Also, many trials limit subjects to those in certain age groups, or those without other conditions. It is important to note those criteria, because either the efficacy or the complication rates might be different for those with different ages, or those with other health problems, on whom physicians might want to use the treatments/tests/meds.

      c) Implications for a pyramid. There are two problems with the pyramid notion. First, the excellence of different kinds of studies is not necessarily the same for both validity issues. Randomized trials usually are strongest with respect to internal validity, but it is common for them to be weak on external validity. First, the subject eligibility rules are limiting. For many trials, many people are not willing to be randomized. Those who are willing might be different from others in important ways. In contrast, studies that follow cohorts often are particularly strong because they include more representative samples of patients. Their problem is likely to be getting a good comparison group. And meta-analyses – at the top of the pyramid – are no better than the trials available to examine. Second, it follows that how a particular study is designed and executed in ways that affect the validity issues is as important as the particular kind of design that is used. Some cohort studies are better than some randomized trials when looked at as a whole. Ordering by type of design alone may send a message that is too strong.

      This was our attempt to address the point you made succinctly, that “All types of trials have their strengths and weaknesses.”

    Ivan Oransky posted on March 27, 2012 at 1:16 pm

    Great discussion, and I think it’s terrific that someone from BMJ is engaged here, as Kevin Lomangino noted on Twitter.

    We cover plenty of observational studies at Reuters Health. When it comes to diet and lifestyle choices, limiting ourselves to RCTs would make for an extremely thin gruel, one that would keep our clients pretty hungry. But we do our best to cover them with the sorts of caveats championed here, and we don’t use cause-and-effect language.

    Speaking of which, I’m still concerned about the BMJ press release that accompanied this study. I have to take exception with Trish’s comment that “White rice increases risk of type 2 diabetes” isn’t “actually wrong.” It is. You simply can’t make that kind of claim based on the study at hand.

    To quote Gary, I’d look to journals to “help journalists use accurate language.” This press release didn’t do that, and I’ll be sure to keep it in mind next time I hear a journal blaming reporters for not letting nuance get in the way of a good story. We all need to take responsibility.

    Ivan Oransky, MD
    Executive Editor
    Reuters Health

Yoni Freedhoff posted on March 27, 2012 at 10:14 am

Dr. Groves,

Thrilled that you’re engaged in this discussion.

While you’re certainly correct that we’re unlikely to see a study design specifically addressing white rice, the expectation of anyone who cares about observational research would be that a meta-analysis would ensure studies were of sufficient caliber to draw conclusions. In the case of this white rice paper, that simply wasn’t the case as the cohorts weren’t controlled for the most basic of confounders such as the consumption of other types of refined carbohydrates.

While I’m personally baffled that your weekly committee and peer reviewers took no issue with the study’s lack of conclusive power – something your invited editorialist immediately recognized – even taken at face value the study concludes that the impact of white rice on diabetes risk is significant only in Asian populations. Yet that fact was omitted from the press release – an omission that’s far too serious to be attributable to brevity.

Lastly, the fact that there have been positive rapid responses is, of course, neither here nor there as part of this discussion.


Trish Groves posted on March 27, 2012 at 10:41 am

Thanks, Gary; glad to debate the challenges of reporting observational studies – here and on Twitter (I just tweeted the link again via @trished).

Yoni – for what it’s worth, the white rice paper went through the BMJ’s full peer review process and was reviewed by two independent epidemiologists with expertise in nutritional research plus the BMJ’s consulting epidemiology editor and an Oxford-based statistics editor – as well as the full team of research editors. The authors revised their paper before acceptance, in response to comments from all of the above.

But your criticisms are important, Yoni – I do hope you’ll post a rapid response to share them with our readers, now and in the future. Rapid responses provide important postpublication peer review and remain attached to the article. Anyone reading the paper at any time will be able to see and respond to your comments. Perhaps most importantly, so will the authors of the paper.


Beth Kitchin, PhD, RD posted on March 27, 2012 at 3:46 pm

While I agree that we should not disregard the findings of observational studies, I agree that the press release from BMJ uses misleading and inaccurate language. Also, I would like to see the absolute risk reported. I fear that the exaggeration of study results leads to the public’s distrust of our advice. I can’t tell you how often I hear “one day you all say eggs are bad, then they’re good, then they’re bad again”. We need to dedicate ourselves to reporting data accurately or we risk losing the trust of the public we are trying to serve.
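Her point about absolute risk is easy to illustrate with simple arithmetic. The numbers below are hypothetical, not taken from the BMJ paper: the point is only that a relative risk that sounds alarming in a headline can correspond to a small change in absolute risk.

```python
# Hypothetical numbers for illustration only -- not from the BMJ study.
baseline_risk = 0.055    # assumed absolute risk of type 2 diabetes, unexposed
relative_risk = 1.27     # the kind of relative increase headlines trumpet

exposed_risk = baseline_risk * relative_risk
absolute_increase = exposed_risk - baseline_risk

print(f"relative increase: {relative_risk - 1:.0%}")   # reads as "27%"
print(f"absolute increase: {absolute_increase:.1%}")   # roughly 1.5 percentage points
```

Reporting both figures, as she suggests, lets readers weigh a finding without either dismissing it or panicking over it.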