It’s difficult to make a case for hiding or obscuring information about health and the medicines we take, but it seems the editors of two top medical journals are doing just that. The decisions of these editors substantially affect the quality of medical research studies reported, what public relations officials communicate about those studies, and what news stories eventually say about the research to patients and the public.
Annals — outcomes bait and switch
A basic tenet of clinical trials, of Clinical Trials 101 so to speak, is to state ahead of time, in writing, and publicly, in print or on a website, what it is you are testing. What is your endpoint or outcome measure? Is it number of deaths, heart attacks, blood glucose level, depression score? At 3 months, 6 months, one year? Stating your outcome measure — that is, fixing your goalpost ahead of time — keeps people from cheating (being biased, whether consciously or not) by changing outcomes as the trial goes along to match what the data might be telling them. Professionals run these trials; they know this, and there are trial registries where they can document their planned outcomes. Hence the concern with the following pronouncement from the helm of the Annals of Internal Medicine:
“On the basis of our long experience reviewing research articles, we have learned that prespecified outcomes or analytic methods can be suboptimal or wrong.”
Of course, not even journal editors’ “long experience” can trump decades of statistical methodology. Unfortunately, this attitude has contributed to the contamination of, and mistrust in, the medical literature, as detailed by Dr. John Ioannidis and others.
The editors’ statement was part of a response to the Centre for Evidence-Based Medicine’s Outcome Monitoring Project (COMPARE) critique of two trials published in the Annals that did not report outcomes consistent with what had been promised and pre-specified. The Centre, based at Oxford University, has taken on the challenging and laudable project of monitoring how correctly outcomes are reported in clinical trials published in the top five medical journals. When a problem is identified, the Centre team submits a letter to the journal for publication and hopes for a response and remediation of the problem. Outcome switching appears to be rampant: of the 67 articles published in the top five journals in October and November 2015, all but nine had changed their outcomes without telling their readers.
Most clinical trialists agree, including those at COMPARE (as they make clear on their site), that it’s not that outcomes can never be switched after a protocol begins; one simply has to be clear about why and when the outcome was changed, and document the change.
Many inconsistencies appear in the letter from the Annals editors. For example, COMPARE uses standard, accepted criteria to determine whether endpoints were pre-specified in a protocol; it is misleading to call these criteria a “rigid evaluation,” as the editors do. Nor did I see any evidence on the COMPARE website of “labeling any discrepancies as possible evidence of research misconduct.” These editors seem to miss the point of the COMPARE project and, by extension, of the CONSORT guidelines, established almost 20 years ago as standards for reporting trials, standards I thought journal editors had agreed to adhere to. Dr. Ben Goldacre of COMPARE wrote a detailed response addressing additional inconsistencies and misunderstandings.
NEJM – parasites and blindness
Separately, but in seeming complement to the Annals editors’ position on outcome switching, the editors of the New England Journal of Medicine came out against data sharing, just as efforts to make data more available and transparent to more people have been making significant progress and the dire need for replication of studies is being increasingly recognized. The NEJM’s retrograde stance on sharing calls to mind the journal’s puzzlingly enthusiastic endorsement of tight physician-pharma relations.
What’s wrong with data sharing? The NEJM editors fret that “someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters.” It seems the wise journal editors want to shelter the scientific community and the public from the idiots who “may not understand” a dataset (or perhaps shelter their pharma advertisers?). But part of the point of science is to foster differing points of view. The editors give no further reason for their concern at this point; they go on to consider questions of data combination relevant to meta-analyses, questions that are irrelevant when it comes to examining a single dataset.
Medical research has indeed become something of a swamp, and the NEJM editors warn of the emergence of “research parasites” who “use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited.” Stealing research productivity? I’d like an example or two where this has happened, as many datasets have been made public at this point.
The editors also seem blind to some larger points, such as that trying to disprove or replicate what others have done is a big part of what science is supposed to be about! No one is proposing that investigators collect data and immediately post it for all the world to see; I would think most people would agree that a primary team should be allowed time to publish its results. In the case of clinical trials, most urgently for drugs or tests already on the market, I think it only fair that data be made available so that prescribing physicians have full information about what they are prescribing, and patients about what they are being prescribed. To his credit, NEJM editor Dr. Jeffrey Drazen just today published a clarification of the journal’s data-sharing policy specifically for clinical trials: the journal will require authors to commit to making data available within 6 months of publication. With more eyes on data, tragedies such as that of the over-marketed arthritis drug Vioxx could very likely be cut short before the loss of 55,000 lives and tens of thousands of heart attacks and strokes.
Journal editors certainly have conflicting interests of their own, an obvious one being the huge revenues generated by pharmaceutical company ads and reprints. One is hard pressed to find evidence showing transparency and sharing in science to be a bad thing for anyone without marketing interests. But bias is hard to sort out; hence statistics and transparency. Cave dwellers, I suppose, don’t appreciate the shining of light, but many of them have adapted to the point of becoming blind.
John Mandrola, MD, on Twitter, flagged a comment by Dr. David Karpf about problematic aspects of data-sharing proposals. Mandrola’s tweet generated a long stream of responses, for and against, that is worth reading.
— John Mandrola, MD (@drjohnm) January 24, 2016
Dr. Molchan is a psychiatrist and nuclear medicine physician with extensive experience in clinical research at the National Institutes of Health.