Canadian physician-blogger Yoni Freedhoff has written a post titled “What Reading That White Rice and Diabetes Study Actually Told Me.”
He analyzes the study’s methodological issues, questions the BMJ’s decision to publish it, and then adds:
But that’s not the truly shocking part. This is. The BMJ published an accompanying editorial that rightly called the paper out on its methodological and statistical inadequacies and concluded:
“Although the findings of the current study are interesting they have few immediate implications for doctors, patients, or public health services and cannot support large scale action. Further research is needed to develop and substantiate the research hypothesis.”
Yet what’s the title and first line of the very same BMJ’s press release regarding this statistically and methodologically weak paper?
“White Rice Increases Risk of Type 2 Diabetes
The risk of type 2 diabetes is significantly increased if white rice is eaten regularly, claims a study published today on bmj.com.”
The next six paragraphs of the press release continue in that same conclusive, important-sounding vein, only to end with this last line:
“In an accompanying editorial, Dr Bruce Neal from the University of Sydney suggests that more, bigger studies are needed to substantiate the research hypothesis that white rice increases the chances of getting type 2 diabetes.”
Guess which 7 of 8 press release paragraphs the media paid attention to?
Here is a sampling of actual media headlines that ran as a consequence of the BMJ’s misleading press release:
“Eating White Rice Daily Up Diabetes Risk” – CBS News
“White Rice ‘Could Cause Diabetes’” – The Independent
“Think Twice About Rice? New Study’s Advice” – ABC News
“White Rice Raises T2 Diabetes Risk, Claim Academics” – The Daily Telegraph
“White Rice Linked to Diabetes Risk” – WebMD
“White Rice May Increase Your Risk of Diabetes” – MSNBC
And of course the Twitterverse went crazy too. Rice-bashing tweets abounded, some from rather influential tweeps, including Dr. Sanjiv Chopra, Dean of Continuing Education at Harvard Medical School, the Harvard School of Public Health itself, journalist Greta Van Susteren, the American Society of Nephrology, the Drudge Report, and many, many more.
Now I’m not suggesting white rice is a wise food to consume. On the contrary, I generally recommend people try to minimize its consumption. But to be very clear, this study does not in any way, shape, or form have the strength to support any conclusions whatsoever about the specific impact white rice has on the risk of developing type 2 diabetes.
And what the hell is up with the BMJ? Why publish a paper so weak that you feel the need to co-publish an editorial questioning its design and conclusions, and then simultaneously put out a press release that glosses over those weaknesses and misinforms the media about the paper’s conclusions?
The Behind the Headlines site in the UK published its usual thoughtful analysis of the study, “Rice ‘diabetes risk’ overstated.” Excerpt:
Although the review has found an association, it cannot prove that white rice itself directly causes type 2 diabetes, as there are many other factors that could affect the risk of developing the condition (such as physical activity, alcohol and obesity).
Kevin Lomangino, one of our story reviewers on HealthNewsReview.org, reminds us of another recent example of problems with a BMJ news release on an observational study – that time regarding chocolate.
Comments (18)
Please note, comments are no longer published through this website. All previously made comments are still archived and available for viewing through select posts.
shaun nerbas
March 26, 2012 at 1:00 pm
Keep asking these questions about the amount and quality of the evidence some of these wild health headlines contain. This sensationalized health reporting sounds like the same kind of quality stories reported about teen idols and the Academy Awards; it’s speculative, it gets your attention, but it is often misleading and possibly quite wrong.
Chocolate, rice, aspirin, and coffee sales are affected by this stuff. It leaves a lot of desperate people open to financial exploitation.
Trish Groves
March 27, 2012 at 9:40 am
I’d just like to add a bit of context to this post – which seems to me at least as sensationalist as the story it’s criticising.
It’s odd how commentators are often so negative about well conducted, cautiously interpreted observational studies. There still seems to be a prevailing view that if it ain’t a randomised controlled trial (RCT) it ain’t science – which is nonsense unless you’re assessing the effects of an intervention/diagnostic test/screening programme.
The 2011 Levels of Evidence table produced by the Oxford Centre for Evidence Based Medicine states clearly that for prognosis studies (including those looking at risk), the highest level of evidence is a systematic review of cohort studies, followed by individual cohort studies:
http://www.cebm.net/mod_product/design/files/CEBM-Levels-of-Evidence-2.1.pdf
Nobody’s going to do an RCT to see whether eating white rice gives you diabetes, not least because it’s the wrong study design: a trial would have to be enormous and very long and the participants would, as always in trials, not be representative or reflect what happens in real life.
The BMJ study (whose full text is available here to all, with open access http://www.bmj.com/content/344/bmj.e1454) adjusted as much as it could for confounding factors and the paper included a cautious discussion about possible residual confounding:
http://www.bmj.com/content/344/bmj.e1454
So far its 9 Rapid Responses (eletters – again, all openly accessible) have been supportive of the paper:
http://www.bmj.com/content/344/bmj.e1454?tab=responses
but we also welcome, of course, responses that critique the study.
There were two other criticisms:
1. that the BMJ’s press release was wrong (http://www.bmj.com/press-releases/2012/03/15/white-rice-increases-risk-type-2-diabetes). While it wasn’t actually wrong – as the study indeed shows that eating white rice is associated with higher risk of diabetes – it’s fair to say that it was probably too brief because it left the paper’s authors and the editorialist to discuss the caveats about the study. Every piece of science comes with caveats and at the BMJ we work hard to make sure they’re clearly discussed in our articles.
2. that the accompanying editorial in the BMJ demolished the paper. The editorial, which the BMJ commissioned – as always – with an open brief to set the paper in context, did indeed criticise the paper and highlight the caveats that the authors had already discussed in their paper. What’s wrong with that? Would it have been better for the BMJ to say “actually, we won’t publish your editorial because you don’t fully support the paper’s findings”?
Trish Groves
Dr Trish Groves, deputy editor BMJ
(Competing interest: I head up the BMJ team of research editors, manage the overall peer review process, and chair the weekly committee that decides which papers to publish)
Gary Schwitzer
March 27, 2012 at 10:20 am
Trish,
Thanks for your note.
Our site/project focuses on the improvement of health care journalism. In so doing, we often reflect on the potential impact of news releases. That was the particular angle in Yoni Freedhoff’s post that caught my eye.
Let me be clear about my own position: I am not “negative about well conducted, cautiously interpreted observational studies.”
I don’t have “a prevailing view that if it ain’t a randomised controlled trial (RCT) it ain’t science.”
I do believe, though, that it is misleading for a news release to use phrases such as “increases risk” in describing an observational study – without including any discussion about “association versus causation.” We go to great lengths – every day – to try to help journalists improve their work. On this issue, we have posted a primer, “Does The Language Fit the Evidence? – Association Versus Causation.”
We wish more medical journals would join us more often in the direct attempt to help improve health care journalism and the flow of information to news consumers and health care consumers.
To try to help journalists use accurate language is not the same as being “negative about well conducted, cautiously interpreted observational studies.”
Benjamin Cairns
March 27, 2012 at 12:17 pm
Gary,
Your project is an outstanding effort to address poor reporting of health news, and I follow it because it has as many lessons for scientists as it does for journalists. Typically, the advice given in the primer on reporting observational studies is excellent.
At face value, however, the primer does elevate RCTs to a level of authority that they rarely reach in practice. I accept that this is not your personal view, but it is the impression given by that article.
All types of trials have their strengths and weaknesses, and all are intended to provide statistical evidence relating to a hypothesis. Positive results from a randomised controlled trial, if it is large, well-designed and well-run, are much more reliable, because uncertainties in results can be attributed to chance rather than bias. But in practice it is often an open question whether the intervention is “the only difference” between the treatment and control groups in an RCT.
These details can no more be ignored by health writers than can the “nuances” of observational studies. Even in the best-case scenario, chance and serious related issues like publication bias cannot be entirely dismissed when reporting RCT results. I get the impression from the article that RCTs are almost above the maxim that correlation does not imply causation. Of course, they are not. We may decide that, with sufficient data from good RCTs, the observed association is very unlikely to be due to chance or bias, but this is not (to my mind) the same as “demonstrating cause and effect”.
Trish has already described situations where well-run observational studies provide the highest standard of evidence. It is therefore incorrect to state that observational studies can only provide evidence that “a stronger design could explore further.” The description of the possibility of residual confounding as “inevitable” in observational studies is strictly accurate, but hardly precise; the magnitude of potential bias due to confounding varies greatly in observational studies (as it may in less-than-perfect RCTs). Not every interpretation of an observational study is “speculative at best”.
The primer seems to give RCTs a free ride; how often can a health writer be truly confident that this exception is justified?
(My conflict of interest is that I work on large prospective studies, mostly on topics that cannot easily be investigated using RCTs.)
Gary Schwitzer
March 27, 2012 at 2:10 pm
Benjamin,
Thanks for your kind words and for your constructive comments, with which I agree.
The very first “tip for understanding studies” that we offer in our online toolkit is that interested parties get a copy of our guide, “Covering Medical Research: A Guide for Reporting on Studies.”
In it, we include this advice from Jack Fowler, PhD, who wrote the book Survey Research Methods, and who urges journalists not to make sweeping conclusions about the value of a study just because of its place on a pyramid or hierarchy of evidence.
This was our attempt to address the point you made succinctly, that “All types of trials have their strengths and weaknesses.”
Ivan Oransky
March 27, 2012 at 1:16 pm
Great discussion, and I think it’s terrific that someone from BMJ is engaged here, as Kevin Lomangino noted on Twitter.
We cover plenty of observational studies at Reuters Health. When it comes to diet and lifestyle choices, limiting ourselves to RCTs would make for an extremely thin gruel, one that would keep our clients pretty hungry. But we do our best to cover them with the sorts of caveats that HealthNewsReview.org champions, and we don’t use cause-effect language.
Speaking of which, I’m still concerned about the BMJ press release that accompanied this study. I have to take exception with Trish’s comment that “White rice increases risk of type 2 diabetes” isn’t “actually wrong.” It is. You simply can’t make that kind of claim based on the study at hand.
To quote Gary, I’d look to journals to “help journalists use accurate language.” This press release didn’t do that, and I’ll be sure to keep it in mind next time I hear a journal blaming reporters for not letting nuance get in the way of a good story. We all need to take responsibility.
Ivan Oransky, MD
Executive Editor
Reuters Health
Yoni Freedhoff
March 27, 2012 at 10:14 am
Dr. Groves,
Thrilled that you’re engaged in this discussion.
While you’re certainly correct that we’re unlikely to see a study design specifically addressing white rice, the expectation of anyone who cares about observational research would be that a meta-analysis would ensure studies were of sufficient caliber to draw conclusions. In the case of this white rice paper, that simply wasn’t the case as the cohorts weren’t controlled for the most basic of confounders such as the consumption of other types of refined carbohydrates.
While I’m personally baffled that your weekly committee and peer reviewers took no issue with the study’s lack of conclusive power – something your invited editorialist clearly recognized immediately – even when taken at face value the study concludes that the impact of white rice on diabetes risk is only significant in Asian populations. Yet that fact was omitted from the press release – an omission that’s far too serious to be attributable to brevity.
Lastly, the fact that there have been positive rapid responses means nothing of course and is neither here nor there as part of this discussion.
Sincerely,
Yoni
Trish Groves
March 27, 2012 at 10:41 am
Thanks, Gary; glad to debate the challenges of reporting observational studies – here and on Twitter (I just tweeted the link again via @trished).
Yoni – for what it’s worth, the white rice paper went through the BMJ’s full peer review process and was reviewed by two independent epidemiologists with expertise in nutritional research plus the BMJ’s consulting epidemiology editor and an Oxford-based statistics editor – as well as the full team of research editors. The authors revised their paper before acceptance, in response to comments from all of the above.
But your criticisms are important, Yoni – I do hope you’ll post a rapid response on bmj.com to share them with our readers, now and in the future. Rapid responses provide important postpublication peer review and remain attached to the article. Anyone reading the paper at any time will be able to see and respond to your comments. Perhaps most importantly, so will the authors of the paper.
Trish
Beth Kitchin, PhD, RD
March 27, 2012 at 3:46 pm
While I agree that we should not disregard the findings of observational studies, I agree that the press release from BMJ uses misleading and inaccurate language. Also, I would like to see the absolute risk reported. I fear that the exaggeration of study results leads to the public’s distrust of our advice. I can’t tell you how often I hear “one day you all say eggs are bad, then they’re good, then they’re bad again”. We need to dedicate ourselves to reporting data accurately or we risk losing the trust of the public we are trying to serve.