Health News Review

In his new home at MedPage Today, Ivan Oransky writes, “Peer Review Cuts Down Clinical Trial Spin.” Excerpts:

“Peer review, while hardly perfect, does improve the reporting of randomized trials, according to a preliminary study presented … at the Peer Review Congress.

Authors of about a third of the analyzed reports changed their conclusions in response to reviewer comments, mostly to tone down spin because they had “gotten a bit overexcited,” said Sally Hopewell, DPhil, of Oxford University.

One in five of the final papers included additional analyses requested by peer reviewers, and about a quarter added information about trial registration. Others clarified how subjects were randomized and allocated, or which outcomes were primary and secondary.

About one in five reviews mentioned the CONSORT Statement, “an evidence-based, minimum set of recommendations for reporting” randomized controlled trials.

Hopewell’s team did not look at what clinical effects the changes had, but they found that peer review “did lead to noticeable improvements in reporting.”

Authors of about 12% of the papers removed spin from their abstracts. Such bias has been a particular concern in breast cancer trials recently.”


On a lighter note… In a post on the Retraction Watch blog, Oransky once linked to the Dilbert blog's take on peer review. Excerpt:

Assuming scientists are human beings, it seems to me that most peer reviewers would fall into one of these categories:

1. Asshole
2. Biased egomaniac
3. Nice person who doesn’t want to make people feel bad
4. Too busy to put any quality thought into it
5. Person with low self-esteem who doesn’t want others to succeed in his or her field
6. Coward who doesn’t want to rock the boat

I suppose some scientists have plenty of free time, no biases, and would be happy to see colleagues succeed beyond their own careers. But seriously, how many of those scientists could there be? I don’t know any non-scientists who could fit that description.

Still, I assume peer review works well enough for killing the worst ideas. I don’t have a better idea for evaluating science. It’s just important to keep things in perspective.




Greg Pawelski posted on September 9, 2013 at 4:24 pm

Peer review, in itself, lacks consistent standards. In general, a peer reviewer often spends about four hours reviewing research that may have taken months or years to complete, but the amount of time spent on a review and the expertise of the reviewer can differ greatly.

Journal editors are the "gatekeepers" of information; only what they allow gets published. What's that saying? "If peer review were a drug, it would never be marketed." Peer review is nothing but a form of vetting (whether motivated by anger, jealousy, or whatever). Reviewers are in fact often competitors of the authors of the papers they scrutinize, raising potential conflicts of interest.

These are the major flaws in the system of peer-reviewed science. All the more reason why journalists should avoid relying on the latest studies for medical news coverage.