Health News Review

On Slate.com, Razib Khan wrote, “The Obesity Rate for Children Has Not Plummeted: Despite what the New York Times tells you.”

The Times wasn’t alone in hyping “Obesity Rate for Young Children Plummets 43% in a Decade,” the headline on its report about a study published in the Journal of the American Medical Association. Dozens and dozens of stories by major news organizations unquestioningly ran with that number.

But the Slate piece points out:

The warning signs are right there in the Times piece, where by the third paragraph the reporter, Sabrina Tavernise, reveals that “About 8 percent of 2- to 5-year-olds were obese in 2012, down from 14 percent in 2004.” The six-percentage-point difference in absolute terms results in the 43 percent relative difference. The Times’ headline blared the relative figure because the absolute drop is just not that impressive.

The report’s closing two sentences are telling: “Overall, there have been no significant changes in obesity prevalence in youth or adults between 2003-2004 and 2011-2012. Obesity prevalence remains high and thus it is important to continue surveillance.” Would you have anticipated such a downbeat conclusion from the newspaper headlines? I doubt it. When evaluating the total sample across age groups, rather than just 2- to 5-year-olds, there hasn’t been any change at all. From the perspective of the researchers themselves, the continuing obesity problem seems to be the most important finding.

On Forbes.com, Geoffrey Kabat also criticized the “much ballyhooed statistic” in “How Credible is CDC’s 43 Percent Decline In Obesity in Young Children?”

Summary: it’s not inaccurate to use relative difference numbers. It’s just not very helpful for public comprehension. A 6-percentage-point absolute drop is still worth talking about.
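
To make the arithmetic concrete, here is a minimal sketch (in Python, purely illustrative; the 14 percent and 8 percent prevalence figures are the ones quoted above):

    # Obesity prevalence among 2- to 5-year-olds, as quoted from the study
    old = 0.14   # 2003-2004
    new = 0.08   # 2011-2012

    absolute_drop = old - new          # 0.06 -> a 6-percentage-point drop
    relative_drop = (old - new) / old  # ~0.43 -> the "43 percent" in the headlines

    print(f"Absolute drop: {absolute_drop * 100:.0f} percentage points")
    print(f"Relative drop: {relative_drop * 100:.0f} percent")

Both numbers describe the same change; the headlines simply chose the larger-sounding one.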

———————-


Comments

Marilyn Mann posted on March 9, 2014 at 3:24 pm

I don’t disagree with your point but I’m wondering how reliable this subgroup analysis is. If you test enough subgroups, you can have one showing a significant difference, but it may be a statistical fluke. The researchers themselves note that the tests for significance were not adjusted for multiple comparisons and the results of this subgroup analysis should be interpreted with caution.

    Gary Schwitzer posted on March 9, 2014 at 7:42 pm

    Yes, and that was a point made by the author of the Slate piece.
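
A rough illustration of the multiple-comparisons concern raised in the comment above: if you run several independent significance tests at the 0.05 level when nothing has really changed, the chance that at least one of them comes up “significant” by luck alone grows quickly. The sketch below (Python, with purely hypothetical subgroup counts) shows the arithmetic; it is not a claim about how many comparisons the study actually made.

    # Chance of at least one false positive across k independent tests at alpha = 0.05,
    # assuming no real change in any subgroup (the values of k are hypothetical)
    alpha = 0.05
    for k in (1, 5, 10, 20):
        print(k, round(1 - (1 - alpha) ** k, 2))  # e.g. about 0.40 for 10 tests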