How marketing, not evidence, often drives clinical trial research

Blogger Alison Bass jumps on a Journal of Bioethical Inquiry article that says that “while evidence-based medicine is a noble ideal, marketing-based medicine is the current reality.”

Bass consistently tracks medicine’s conflict-of-interest issues. Her blog would be a good bookmark for you if you care about these issues. And her book, “Side Effects: A Prosecutor, a Whistleblower, and a Bestselling Antidepressant on Trial,” is terrific.

Please note, comments are no longer published through this website. All previously made comments are still archived and available for viewing through select posts.

Gregory D. Pawelski

January 30, 2010 at 7:35 am

I did a paper on taxol, and one of the researchers listed in the footnotes of my paper told me that the study he finally published in the journal Oncology had been rejected by all the other American and European cancer journals (Journal of Clinical Oncology, Cancer, Annals of Oncology, European Journal of Cancer, International Journal of Cancer) to which it had been submitted. The journals were reluctant to publish such a scientific report, simply because taxanes (both taxol and taxotere) were at the time very intensively advertised in those journals.
Less than 20 percent of registered clinical trials of cancer drugs are eventually published in medical journals, according to a review published online by The Oncologist.
A search of the National Institutes of Health’s web site identified 2,028 registered research studies of cancer treatments. Major medical journals require that all studies considered for publication be registered with the NIH registry or another publicly accessible database.
And a subsequent search of the National Library of Medicine’s PubMed database showed that just 17.6 percent of the trials were eventually published in peer-reviewed medical journals.
The publication rate was particularly low for industry-sponsored studies, such as those funded by drugmakers (just 5.9%, compared to 59% for studies sponsored by collaborative research networks). Of the published studies, nearly two-thirds had positive results, meaning the treatment worked as hoped. The remaining one-third had negative results: the outcome was disappointing or did not merit further consideration of the tested treatment, they report.
The finding raises concern about publication bias in cancer treatment trials, according to the researchers, Scott Ramsey and John Scoggins of the University of Washington and the Fred Hutchinson Cancer Research Center in Seattle.
The researchers suspect the rate of negative results is much higher in the studies that have gone unpublished. “It is likely that many unpublished studies contain important information that could influence future research and present practice policy,” they wrote.
Of course, we know why a registered trial may not be published: some trials fail, and a researcher may decide the result doesn’t enhance knowledge or one’s reputation. And some sponsors don’t want negative results out there. The same goes for some journal editors.
But “unpublished trials may have special importance in oncology, due to the toxicity and/or expense of many therapies,” they wrote. In other words, the knowledge base is incomplete. And who does that help?

Nicholas Fogelson, MD

January 31, 2010 at 4:46 pm

Publication bias is a huge problem in the medical literature, for many reasons. Sometimes it’s because of advertisers, but more often it is because of the origins of an article. An article submitted by a major researcher with 100+ publications is pretty much guaranteed publication with little editing, while young researchers are subjected to incredible scrutiny and rejection. International papers also suffer from much higher rejection rates. Some would argue that this reflects the quality of the papers, but we don’t really know.
But we do know how to find out. All one would have to do is eliminate all references to author and study location in the review materials for a period of, say, one year. Then go back and look at what got accepted for publication. If publication is truly fair, we should see a similar distribution of authors in that year’s publications. But if we start seeing lots of new authors we have never seen before, from new centers and new countries, that would be a different story. It would be de facto evidence of publication bias, and it would suggest that a change needs to be made.
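The experiment described above boils down to a simple statistical comparison: did the share of, say, first-time authors among accepted papers change once reviews were blinded? As a rough illustration only (the counts below are hypothetical, not real journal data), one could test this with a standard two-proportion z-test:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test.

    Compares the rate success_a/n_a against success_b/n_b
    using a pooled standard error; returns (z, two-sided p-value).
    """
    p_a = success_a / n_a
    p_b = success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 30 of 400 accepted papers had first-time
# authors before blinding, versus 70 of 400 during the blinded year.
z, p = two_proportion_z(30, 400, 70, 400)
print(f"z = {z:.2f}, p = {p:.5f}")
```

With those made-up counts the jump from 7.5% to 17.5% first-time authors would be highly significant, which is exactly the kind of “de facto evidence” the comment is pointing at; with real data, the same test could just as easily show no change.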