Kevin Lomangino is the managing editor of HealthNewsReview.org. He tweets as @KLomangino.
A study published recently in The BMJ addressed a question with surefire media appeal: Does the political affiliation of doctors affect the quality of care that they provide to patients at the end of their lives?
The story was snapped up by news organizations ranging from US News and World Report to the UK Daily Mail. The study was also the subject of a USA Today op-ed by two of the study's co-authors, Dhruv Khullar, MD, of Cornell University and Anupam Jena, MD, PhD, of Harvard Medical School.
Their conclusion was a reassuring one: “Whatever a doctor’s political views, end-of-life care is the same.”
But some experts remain uneasy about the editorial process that produced the BMJ paper. Why? Two of the four peer reviewers who evaluated the study are close collaborators of the study authors, and one of the two is also a current business associate. Those reviewers, as well as the study authors, all have ties to the health care consulting firm Precision Health Economics.
Such cozy relationships are expressly prohibited by ethical guidelines for peer review, since they may bias the review process and give the appearance of a conflict of interest. And this is not the only instance where such ties raise questions about the quality of science published in the BMJ. My review of past BMJ studies written by one of the two authors of the current study — Anupam Jena — reveals a troubling pattern: the same close colleagues and business associates have been repeatedly tapped to review his manuscripts. While offering substantive critiques of the science, those colleagues also reliably praise the quality of Jena’s scholarship and appear to recommend that his manuscripts be published with revisions.
“The study is highly innovative, important, and timely,” said Eric Sun, MD, PhD, of Harvard Medical School, in his review of the recently published political affiliations study. Sun is a long-time collaborator of Jena’s and former associate at Precision Health Economics, where Jena consults as a scientific advisor.
The research also garnered praise from Dana Goldman, PhD, of the University of Southern California — another frequent collaborator of Jena’s as well as a founding partner of Precision Health Economics. He called the study “a dispassionate examination of whether affiliation affects performance” and called the methodological approach used by Jena both “clever and appropriate.”
“This looks really bad,” said Melissa S. Anderson, PhD, a professor of higher education at the University of Minnesota who previously co-chaired the World Conference on Research Integrity. “Why would they want to risk being accused of conflict of interest when [the BMJ peer review process] is open? That amazes me. I don’t know why you’d do that.”
Anderson said that even if these reviewers provided an accurate assessment of the science, the fact that they are close collaborators of the authors casts significant doubt on their conclusions. “If it looks bad, it’s a conflict of interest,” she said. “That actually is becoming the standard for how you determine if something is a conflict of interest. Even the appearance of a conflict constitutes an actual conflict.”
Brian Nosek, PhD, Executive Director of the Center for Open Science, agreed that the situation raises red flags. He noted that peer review guidelines from the Committee on Publication Ethics pointedly forbid this type of scenario. Those guidelines state that “you should not agree to review” if you “are currently employed at the same institution as any of the authors or have been recent (e.g., within the past 3 years) mentors, mentees, close collaborators or joint grant holders.”
“There are strong norms against serving as a peer reviewer of close collaborators and of colleagues at the same institution as you,” Nosek said. “I don’t understand how this norm could have been breached multiple times by the same reviewers so straightforwardly? It suggests multiple breakdowns in the process — why did the journal/editors allow it, why did the reviewer(s) agree to review, are the same people getting asked because they are consistently recommended by the author?”
I found many other instances of conflict of interest in the peer review of Jena's BMJ manuscripts.
The academic publishing world has been grappling with problems in the peer review system for years. It’s becoming increasingly difficult to find qualified reviewers who are willing and able to devote the time needed for a careful review. Trust has also been frayed by a growing number of peer review scams that involve the authors reviewing their own papers. To be clear, the current BMJ situation doesn’t constitute the same type of “fake” review. The comments of these peer reviewers contain criticism and substantive suggestions for improvement on each manuscript. It is apparent that the reviewers have read the papers and thought carefully about the science behind them.
But in this case, such analysis is always embedded within an overall positive framing that appears to encourage acceptance of the manuscript. (BMJ does not ask reviewers to recommend acceptance or rejection, leaving that decision to the editors.) Other reviewers were not as uniformly enthusiastic about these papers.
For example, comments from Carolyn Canfield, a patient reviewer and honorary lecturer at University of British Columbia, were harshly critical of the study linking physician age and mortality outcomes. In her review, Canfield cited a previous study by the same authors that correlated physician gender with patient outcomes. She said these studies can’t draw cause-and-effect conclusions, and yet they tend to generate lots of misleading news coverage:
The article’s publication generated considerable media excitement from blogging professionals and health reporters with headlines such as, “Don’t want to die before your time? Get a female doctor” –USA Today; and “Patients Cared For By Female Doctors Fare Better Than Those Treated By Men” –NPR All Things Considered; and from a Canadian psychology professor “Women physicians are superior doctors according to objective outcomes: mortality.” tweeted by @PaulMinda1 on 19 Dec 2016.
These headlines are unwarranted and unhelpful to the public. If that reception is instructive for this study, notoriety rather than credit is likely to be the true impact factor. Let’s please not blame the public for misinterpreting the evidence. I think that studies such as these are provocative without much likelihood of helping patients or physicians work towards better care.
In an interview, Jena told me that he understood why some might raise questions about his relationship to the reviewers on these papers, whom he acknowledged nominating. However, he said that critics should look at the actual reviews before concluding that there was bias in the process.
“You can’t just assert that someone has a conflict of interest,” Jena said. “You’ve got to look at the content.”
He acknowledged that he would not nominate peer reviewers who he thought were likely to be critical of his manuscript, but said that he always suggested people who were health policy experts and qualified to evaluate his methods and data. “I hope that people would take the time to look at the reviews and analyze whether those reviews seem reasonable. There are multiple reviews on these papers and my instinct would be that there is concordance in the reviews. My guess is that someone like Dana [Goldman] writes serious reviews.”
“I think at the end of the day, what irks me is that it’s simply not scientifically fair to say that someone has a conflict of interest or is biased if they haven’t looked at the evidence.”
He also downplayed concerns that he failed to disclose his relationship to these peer reviewers to the BMJ. “If you were to ask them, ‘What do you do when someone suggests names for peer review?’ I think any editor would say that we look at those names and we know that they won’t suggest people who are likely to be critical of the manuscript. They will pick people who are content experts.”
Elizabeth Loder, MD, a BMJ editor and associate professor of neurology at Harvard Medical School, said that finding qualified peer reviewers is often a problem, especially in smaller research fields. In the case of the political affiliations study, the journal asked seven reviewers to look at the manuscript and only three agreed to do so; all three were reviewers that the authors had nominated in their submission. She said that while the journal always tries to avoid selecting only author-nominated reviewers, “despite our best attempts to get a wide range of reviewers it doesn’t always work out that way.”
She noted, however, that the manuscript was also evaluated by eight BMJ editors and received two rounds of review from an independent statistician. “Each editor made notes with thoughts, many were heavy-hitting and quite critical,” Loder said. “The idea that favorable reviews alone can get a manuscript accepted isn’t accurate. I know from experience that at some smaller journals, reviewer comments play a very strong role in determining whether a manuscript is accepted, but that’s not true of the BMJ process.”
What about the general failure of all parties to declare their conflicts of interest in this matter?
Loder said, “We do ask people to disclose conflicts of interest, and we ask that they think very broadly about what might constitute a conflict. Having said that, it can be very challenging. Financial conflict of interest is very easy to trace, but knowing someone, especially when you’re in a small field and using techniques that no one else can give an expert review on, that’s more difficult. At what point should those things be disclosed? I think I’m genuinely confused myself on that point.”
But Jocalyn Clark, PhD, an executive editor at The Lancet, wrote in an email that the failure to disclose these relationships was a relatively black-and-white ethical lapse. “In my view, the reviewers with previous co-authorship with the Jena author (I note he is first author, so hard to miss when reviewing the paper) and advisory/consultancy/employment roles at the same company (PHE) should not have been selected as peer reviewers,” she said. “The editors may not have known the reviewers they invited were conflicted; the reviewers should have declined the invitation when they received it (reviewer invitations generally have at least the title, author list, and abstract) or even if they accepted the invitation and then had access to the full paper, should have informed the editors they needed to recuse themselves due to a relationship with the author.”
She added, “If your assessment of the reviewers having a prior collegial/co-authorship relationship with the author is true, this is very troubling that the reviewers did not declare – it is effectively them being dishonest and subverting the integrity of the peer review process.”
[Clark disclosed that she is a former editor at BMJ (2002-7) and knows many editors there.]
We only know about this situation because BMJ, unlike most academic journals, has an open and transparent peer review system that invites this type of scrutiny. Loder said the whole idea is to generate constructive criticism that leads to improvement. The journal “is sensitive to the perception that there was a conflict and we’ll aim to do better,” she said.
But what about other journals with less transparency about their peer review process? Could the same thing be happening? And could the implications of such practices elsewhere have bigger stakes for US health care policy?
One can speculate that they might. Jena and his Precision Health Economics colleagues consult for numerous clients in the pharmaceutical industry. And their academic papers and related op-eds typically advocate positions that align with those of the drug industry — for example, claiming that expensive new cholesterol drugs will save the health care system up to $5 trillion over the course of 20 years, or that the value of expensive new cancer drugs exceeds their costs, or that pharma profits from expensive new drugs are in line with profits in other industries.
Is it possible that these types of papers, which influence the discussion on drug prices in the U.S., are getting similar friendly treatment during the review process by Precision Health Economics associates? Should editors at other journals who’ve published such papers go back and investigate the peer review that these papers were subjected to?
Jena said that he publishes infrequently on pharmaceutical policy, in part because he’s sensitive to the appearance of conflict that this creates. He said that he couldn’t think of any recent papers where this type of conflict could have been an issue in the peer review process.
But then again, Jena isn’t the only researcher who hasn’t called attention to his conflicts of interest during the peer review process.
“We simply wouldn’t know if this was happening at another journal [that doesn’t have an open peer review process],” said Anderson, “which is why it’s even more incumbent on [journals] to safeguard their peer review system and keep it free of conflicts of interest. The system is based on trust, and if we can’t trust [the journals] then their articles will be seen as suspect.”