Health News Review

This is a guest column by Ivan Oransky, MD, who is executive editor of Reuters Health and blogs at Embargo Watch and Retraction Watch.

One of the things that makes evaluating medical evidence difficult is knowing whether what’s being published actually reflects reality. Are the studies we read a good representation of scientific truth, or are they full of cherry-picked data that help sell drugs or skew policy decisions?

That question may sound paranoid, but rest assured, it's not. Researchers have worried about "positive publication bias" for decades: studies showing an effect of a particular drug or procedure are more likely to be published. In 2008, for example, a group of researchers published a New England Journal of Medicine study showing that nearly all (94%) of the published antidepressant studies the FDA used to make approval decisions had positive results. But when the researchers counted the FDA's unpublished studies as well, only about half (51%) were positive.

A PLoS Medicine study published that same year found similar results for studies long after drugs were approved: less than half (43%) of the studies the FDA used to approve 90 drugs were published within five years of approval, and those with positive results were more likely to appear in journals.

All of that can leave the impression that something may work better than it really does. And there is at least one powerful incentive for journals to publish positive studies: Drug and device makers are much more likely to buy reprints of such reports. Such reprints are highly lucrative for journals. As former BMJ editor Richard Smith put it in 2005:

An editor may thus face a frighteningly stark conflict of interest: publish a trial that will bring US$100 000 of profit or meet the end-of-year budget by firing an editor.

The editors of many prominent journals, to their credit, have made it mandatory that study sponsors, often drug companies, register all trials. The idea is that regulators, at least, will know how many studies were begun; if they're not all published, perhaps the data aren't as robust as they look. There is even at least one journal, the Journal of Negative Results in BioMedicine, dedicated to such findings.

Still, it's a safe assumption that many negative studies never see the light of day in journals. After all, Nature published a letter earlier this month titled "Negative results need airing, too."

A new study in the Annals of Surgery suggests one place reporters can look for them: lower-ranked journals. The study's authors grouped surgery journals by impact factor, a measure of how often, on average, other studies cite a journal's articles. In the top-ranked journals, 6% of studies were negative or inconclusive, compared with 12% in middle-tier journals and 16% in the lowest tier. (Of note: the lowest-ranked journal the researchers looked at was still in the top third of surgery journals overall.)

The authors suggest their results are likely true of more than just surgery journals:

Although these data are based upon analysis of surgical journals, in as much as that group constitutes nearly 18% of indexed medical journals, we believe these data may be applicable to other disciplines.

The findings present a bit of a dilemma for journalists. On the one hand, reporters covering studies should probably stick mostly to the highest-ranked journals, where there is competition to publish and whose studies other researchers are more likely to read and follow. (Combined with positive publication bias, that competition probably explains, in part, why negative trials end up in lower-ranked journals.) Journal ranking is one of the criteria I use to decide what to cover at Reuters Health.

And the highly ranked journals did a few things better than their lower-ranked peers: they disclosed authors' conflicts of interest more often and published more randomized controlled trials, which many consider the gold standard of clinical evidence. So there are plenty of reasons to focus on such journals.

But reporters should also want to give their readers, listeners, and viewers a complete picture, and reporting on negative studies could mean dipping into lower-ranked journals. Of course, this is just one study, and just as medical practice shouldn't change based on a single report, neither should journalism. Still, based on this Annals of Surgery study, I see no harm in looking at lower-ranked journals periodically and applying the same criteria to them that I would to any journal. Even sticking to the top third of such journals, as the authors of this study did, would increase my yield of negative results.

It seems worth testing.

Comments

Others Negative Results Journals posted on February 14, 2011 at 11:50 am

Thank you, Gary, good post!
It is rewarding to see how people are more and more interested in publishing negative results. There is a new set of journals focused only on publishing negative results; you probably know them: The All Results Journals. Checking their website, I found that they focus not only on biology/medicine but also on chemistry and physics, where, I suppose, there are a lot of unpublished negative studies.
I guess there is also another, deeper problem here: researchers who do not want to publish negative results. How do we combat that?
Lewis

Gregory D. Pawelski posted on February 14, 2011 at 11:58 am

Major drug manufacturers have a tendency to hide data about the safety and efficacy of their drugs, and to produce data of scant clinical value. Conflicts of interest have thoroughly corrupted American medical research. There is dangerous potential for conflicts of interest when pharmaceutical and other for-profit businesses control the dissemination of findings generated by medical research.
The ability of drug companies to pick and choose the research they provide in support of their products is an outrageous conflict of interest and puts all patients in harm’s way. It can undermine public trust in and support for scientific research, endanger research subjects and patients, and boost medical costs by encouraging doctors and patients to use new treatments that are no better than cheaper alternatives.
Studies with positive findings are more likely to be published than studies with negative results. Even negative results can provide useful information about the effectiveness of treatments. Any tendency to put negative results into a file drawer and forget them can bias reviews of treatments reported in medical literature, making them look more effective than they really are.
With most clinical trials, investigators never give out information about how people are doing. Most trials are failures with respect to actually improving things. The world doesn't find out what happened until after 100, 500, or 2,000 patients have been treated, and then only 24 hours before the New England Journal of Medicine publication date.
Giving participants and investigators all the information you can gather is essential to maintaining the kind of good doctor-patient communication that benefits cancer patients.
Dangerous drugs have been allowed to reach the market because conflicts of interest have become so endemic in the system of drug evaluation, a trend that has been exacerbated by the rise of for-profit clinical trials, fast-tracked drug approvals, government-industry partnerships, direct-to-consumer advertising, and industry-funded salaries for FDA regulators.
Collaborations between academia and industry have clearly exerted a discernible influence on clinicians, bringing with them erroneous results, suppressed data, and harmful side effects from these drugs.
There is an inherent conflict of interest when organizations that provide guidelines for treating disease receive funding from corporations that benefit financially from the recommended treatments. There is no proof beyond reasonable doubt for any approach to treating cancer today; there is only the bias of clinical investigators, as a group and as individuals.
The use of clinical trials to establish prescribing guidelines for evidence-based medicine is heavily criticized because such trials have little relevance for the individual patient in the real world; they cannot account for the individuality and uniqueness of each patient. Physicians' freedom to integrate promising insights and methods remains an essential component of quality cancer care.