In his new home at MedPage Today, Ivan Oransky writes, “Peer Review Cuts Down Clinical Trial Spin.” Excerpts:
Authors of about a third of the analyzed reports changed their conclusions in response to reviewer comments, mostly to tone down spin because they had “gotten a bit overexcited,” said Sally Hopewell, DPhil, of Oxford University.
One in five of the final papers included additional analyses requested by peer reviewers, and about a quarter added information about trial registration. Others clarified how subjects were randomized and allocated, or which outcomes were primary and secondary.
About one in five reviews mentioned the CONSORT Statement, “an evidence-based, minimum set of recommendations for reporting” randomized controlled trials.
Hopewell’s team did not look at what clinical effects the changes had, but they found that peer review “did lead to noticeable improvements in reporting.”
Authors of about 12% of the papers removed spin from their abstracts. Such bias has been a particular concern in breast cancer trials recently.
Assuming scientists are human beings, it seems to me that most peer reviewers would fall into one of these categories:
1. Biased egomaniac
2. Nice person who doesn’t want to make people feel bad
3. Too busy to put any quality thought into it
4. Person with low self-esteem who doesn’t want others to succeed in his or her field
5. Coward who doesn’t want to rock the boat
I suppose some scientists have plenty of free time, no biases, and would be happy to see colleagues’ careers surpass their own. But seriously, how many of those scientists could there be? I don’t know any non-scientists who fit that description.
Still, I assume peer review works well enough for killing the worst ideas. I don’t have a better idea for evaluating science. It’s just important to keep things in perspective.