2013 is off to a tough start for my family, with my 92-year-old Dad being hospitalized over the holidays with flu and pneumonia, precipitating a move to an assisted living facility for him and increasing caregiving issues for us. So the volume of publishing on this site may be down a bit. I may be late in reacting to some things. And, in this case, I was unavailable when several journalists tried to reach me for comment prior to deadline about the following study/story.
The study was published in the Annals of Oncology with the title, “Bias in reporting of end points of efficacy and toxicity in randomized, clinical trials for women with breast cancer.”
Results & conclusion, in short:
Of 164 included trials, 33% showed bias in reporting of the primary endpoint and 67% in the reporting of toxicity.
Bias in reporting of outcome is common for studies with negative primary endpoints. Reporting of toxicity is poor, especially for studies with positive primary endpoints.
Some journalists, who had more time than I did yesterday to delve into the details, reported on it.
Reuters reported, “Cancer studies often downplay chemo side effects.” Excerpt:
Doctors relying on studies published in top journals for guidance about how to treat women with breast cancer may not be getting the most accurate information, according to a new analysis.
“Investigators want to go overboard to make their studies look positive,” said Dr. Ian Tannock, the senior author of the new study in the Annals of Oncology.
In two-thirds of the 164 studies Tannock and his colleagues scrutinized, that meant not listing toxicities – in other words, serious side effects, whether of chemotherapy, radiation or surgery – in the paper’s abstract. Such abstracts summarize the findings, and run a few hundred words.
That’s important, said Tannock, of Princess Margaret Hospital in Toronto, because “most of us are so damn busy, we only read the abstract and skim the tables and figures.”
In fact, a fifth of studies didn’t include toxicities in results tables, and about a third failed to mention them in either the abstract or the discussion section.
Most surprising, said Tannock, was that in a third of studies, if the treatment didn’t work as well as one might hope, researchers moved the goalposts, reporting results that weren’t what the study was originally designed to test.
The Canadian Press reported, “Positive spin put on many drug trials despite results, study finds.” Excerpts:
“Sometimes studies that are basically negative studies are a little bit dressed up to look as though they may be positive,” said Dr. Ian Tannock, a medical oncologist who led the study published this week in the journal Annals of Oncology.
“It’s like the politicians. Trying to make things look better than they are.” …
“Some physicians may be persuaded to use a new or different treatment than the standard because they have read papers that suggest that this is a better treatment when it may not really be a better treatment,” he said.
As is known by any frequent visitor to our site, these are recurring themes as we evaluate health care news stories about studies.
When we wrote, “Covering Medical Research: A Guide To Reporting on Studies,” for members of the Association of Health Care Journalists, the first line was:
This guide is intended to help you analyze and write about health and medical research studies. Its publication, however, should not be construed as an invitation to favor this kind of reporting over more enterprising stories that weave in policy and other issues.
Some people who follow research news closely believe that too much media attention is given to reporting on studies – that such a diet of news provides a distorted picture of the health care reality. We’ll investigate some of the factors behind this argument in greater detail.
One journalist on Twitter today wrote: “The real answer is fewer study-based stories. We need to break the cycle.”