Posted by Gary Schwitzer in Health care journalism
Dartmouth’s Steve Woloshin and Lisa Schwartz have studied the pitfalls of news coverage based on presentations at scientific meetings. Their new work, published in the Medical Journal of Australia, examined print and broadcast news stories based on research reports from five major scientific meetings on heart disease, AIDS, cancer, neuroscience, and radiology.
Results: “34% of the 187 stories did not mention study size, 18% did not mention study design (another 35% were so ambiguous that expert readers had to guess the design), and 40% did not quantify the main result. Only 6% of news stories about animal studies mentioned their limited relevance to human health; 21% of stories about small studies noted problems with the precision of the finding; 10% of stories about uncontrolled studies noted it was not possible to know if the outcome really related to the exposure; and 19% of stories about controlled but not randomised studies raised the possibility of confounding. Only 29% of the 142 news stories on intervention studies noted the possibility of any potential downside. Twelve stories mentioned a corresponding ‘in press’ medical journal article; two of the remaining 175 noted that findings were unpublished, might not have undergone peer review, or might change.”
This is important work, warning that “the public may be misled about the validity and relevance of the science presented.”