Health News Review

Recently, I wrote about an article published in PLoS One that pointed out the potential for flawed reporting on the results of published clinical trials. Now, Harold DeMonaco, one of our story reviewers and a frequent guest blogger on our site, writes about an article and accompanying editorial in the September 20 edition of the New England Journal of Medicine that he says every reporter should look at. Here is his guest column:

————————————————————————

The article by Kesselheim and colleagues provides the results of a randomized trial of physicians’ interpretation of clinical trial reports. The authors examined the impact of both methodology and funding source on physicians’ likelihood of prescribing two fictitious medications. One set of studies was said to be funded by the pharmaceutical manufacturer and the other by the National Institutes of Health. The authors concluded, “Physicians discriminate among trials of varying degrees of rigor, but industry sponsorship negatively influences their perception of methodologic quality and reduces their willingness to believe and act on trial findings, independently of the trial’s quality. These effects may influence the translation of clinical research into practice.” The good news appears to be that physicians do take the funding source of clinical trial results into consideration.

In an accompanying editorial entitled “Believe the Data,” Dr. Jeffrey Drazen takes a somewhat divergent and provocative view on the issue of industry funding. He writes, “Is this lack of trust justified? The argument in favor of its justification — that is, the pharmaceutical industry has a financial stake in the outcome, whereas the NIH does not — supports the conclusion that reports from industry-sponsored studies are less believable than reports from NIH-sponsored ones. This reasoning has been reinforced by substantial press coverage of a few examples of industry misuse of publications, involving misrepresentation of the design or findings of clinical trials. However, investigators in NIH-sponsored studies also have substantial incentives, including academic promotion and recognition, to try to ensure that their studies change practice.” He goes on to note, “A trial’s validity should ride on the study design, the quality of data-accrual and analytic processes, and the fairness of results reporting. Ideally, these factors — not the funding source — should be the criteria for deciding the clinical utility.” In essence, let’s not forget about other types of conflicts of interest, and let’s not throw the baby out with the bathwater. This from the editor-in-chief of one of the most notable medical journals published.

It is clear that not everyone agrees with Dr. Drazen.  In a Scientific American guest blog, Dr. Jalees Rehman writes:

“The editor (Drazen) suggests that the funding source should not factor into the evaluation of the clinical relevance and significance of studies. This view from the editor of one of the leading medical journals comes as somewhat of a surprise, because it implies that one should ignore the possibility of hidden biases. Why then, do reputable medical journals such as the New England Journal of Medicine publish details about financial disclosures and conflicts of interest for all their articles, if the readers of the articles are supposed to ignore the funding source when evaluating the articles?

Combining a rigorous analysis of methods of clinical studies with a careful evaluation of potential financial biases is probably the most appropriate way to assess clinical research studies.”

These are clearly two divergent views on the topic. Both have their proponents and their detractors. Both highlight the need for transparency about potential conflicts of interest when reporting on clinical trial results.


—————————

Publisher’s addendum on September 26:

I’m now aware of at least two noteworthy reactions to the NEJM article and to editor Drazen’s editorial:

Comments

Greg Pawelski posted on September 24, 2012 at 12:14 pm

It’s a shame about the clinical trial paradigm. Only eight percent of new drugs entering Phase I trials ever make it to market, and this percentage is even lower for cancer drugs, because current drug testing is inefficient: many drugs fail late in development, and these expensive failures owe, in large measure, to ineffective drugs and poor patient selection. Little progress has been made in identifying which therapeutic strategies are likely to be effective for “individual” patients, not average populations.

Were academic clinical investigators incentivized to achieve greater clinical successes, there would be fewer failed Phase II and Phase III trials. Unlike the business world, where success is rewarded, academic clinicians receive the same compensation for every patient treated, whether the intervention is successful or not. Hence the incentive becomes academic promotion and recognition. This has the unintended consequence of encouraging physicians to accrue patients to clinical trials with no focus on effective therapies.

While it may be gratifying for trialists to have successes, they receive the same compensation for their failures. Academic clinical investigators need skin in the game. The one-size-fits-all paradigm is crumbling as individual patients with unique biological features confront the results of the blunt instrument of randomized clinical trials.

Harold DeMonaco posted on September 24, 2012 at 6:36 pm

Interesting comments. I agree that the current paradigm of clinical trials is flawed and in need of updating. However, it is unclear to me how improving clinician-investigator incentives will improve the situation. My take is that our understanding of disease is seriously flawed at the moment, and that lack of understanding lies behind the failure of most drugs in clinical trials to move forward. I agree that the “one size fits all” approach is likely to produce unnecessary trial failures. But a better understanding of disease as a complex and multifactorial process is perhaps what is necessary. Incentivizing investigators beyond the present approach is not likely to produce results. One could argue that “skin in the game” will produce false positive results.