When the journal Occupational and Environmental Medicine published a paper, “Maternal occupation and the risk of birth defects: an overview from the National Birth Defects Prevention Study,” Reuters Health was ready to report on it.
But when the published study stated, for example, that “The results of this study indicate that women working as janitors have a significantly increased risk of giving birth to a child with (certain birth defects),” Reuters Health understandably and appropriately asked the authors to provide the absolute numbers.
Reuters Health executive editor Ivan Oransky told me: “I even went so far as doing my own calculations based on some of the tables (in the published study), and having the reporter run it by them to make sure I was getting it right.” Oransky said the author responded: “We do not express our results in these terms. …We do not feel it is prudent to publish the sentence below based on these results.” (The sentence in question was one Reuters drafted, incorporating the absolute risk calculations Reuters did on its own.)
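The statistical point at issue here — relative versus absolute risk — is worth spelling out. A minimal sketch, using purely hypothetical numbers (not the study’s actual data), shows why a “significantly increased risk” can sound alarming while the absolute difference stays small:

```python
# Illustrative only: the baseline risk and relative risk below are
# hypothetical, not figures from the published study.

def absolute_from_relative(baseline_risk: float, relative_risk: float):
    """Return (exposed_risk, absolute_difference) for a given
    baseline risk and a reported relative risk."""
    exposed_risk = baseline_risk * relative_risk
    return exposed_risk, exposed_risk - baseline_risk

# Hypothetical: a birth defect affecting 1 in 1,000 births (0.1%)
# and a reported relative risk of 2.0 -- a "doubled risk."
exposed, diff = absolute_from_relative(0.001, 2.0)
print(f"Exposed group risk: {exposed:.2%}")  # 0.20%
print(f"Absolute increase:  {diff:.2%}")     # 0.10%, i.e. 1 extra case per 1,000
```

This is the kind of back-of-the-envelope arithmetic a reporter can do from a study’s tables — which is exactly what Oransky describes doing, and exactly what readers need to judge how big a “doubled risk” really is.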
So Oransky decided it wasn’t prudent to publish a story on the study.
Good for Reuters Health. This little episode raises several important points in my mind:
Researchers love to get publicity for their work. If they won’t take the time to answer basic statistical questions about it, it shouldn’t be publicized.
This study received federal government support. The authors displayed a pretty smug attitude for folks getting taxpayer support for their work.
Journals should take some responsibility as well. This particular journal published a bold sidebar box labeled, “What This Paper Adds.” The “significantly increased risk” language appears in that box. The journal chose to highlight that finding. Why didn’t the journal demand the absolute numbers? Will that journal do anything about a published author refusing to provide data to a reputable news organization?