Science & chronic fatigue syndrome: criticism of journal reviewing

There’s a very important “Observations” article in this week’s BMJ (subscription required for full access). University of Edinburgh scientist Cathie Sudlow writes about a paper in Science that reported an association between a retrovirus (XMRV) and chronic fatigue syndrome. Excerpts:

“… after reading and rereading the article and its online supplement I could find little description of the epidemiological methods. Where were the details of the characteristics and selection procedures for the cases and controls or of blinding of researchers to the case-control status of the samples? Where was the discussion of the potential roles of bias and confounding?

Concerned about the lack of basic methodological information in the Science report, some colleagues and I sent an e-letter response, a week after the initial online publication of the paper. E-letters are described on Science’s website as “online-only, 400-word contributions for rapid, timely discussion.” But no e-letter responses to the paper appeared–and indeed still have not. In early January this year, nearly three months later, a Science editor emailed to explain that they had received a number of responses to the paper and had decided to consider our submission and others as refereed “technical comments,” which appear in Science Online with a response from the authors of the commented paper. By early February we had received referees’ largely enthusiastic comments and an invitation to resubmit an appropriately revised version of our comment, which will hopefully be published soon.

In the meantime three published studies that used samples from the United Kingdom and the Netherlands have found no association between XMRV and chronic fatigue syndrome.

I am pleased that our concerns about potential shortcomings in the methods of the original Science report of this association are likely to be formally published. But the story has led me to reflect on how the whole process of scientific discovery is communicated. What can be done to avoid the false hope that follows an initial report of a breakthrough, which will often later be shown to be a false positive finding or an overestimate of the truth? (my emphasis added) Does a system of genuinely rapid responses to journal articles help, or is a slower and more selective approach such as that taken by Science a better way of reflecting the subsequent scientific debate? I am uncertain, although a forum for such debate must surely be offered by scientific and medical journals.

More importantly, though, I believe that scientists, journal editors, and reviewers must all take more responsibility for ensuring that the published scientific record contains all the information needed to judge the accuracy and reliability of the reported results. Scientists and journals must be careful that the excitement of novel findings and the draw of the impact factor do not lead to important details being overlooked.”

Comments (6)

Please note, comments are no longer published through this website. All previously made comments are still archived and available for viewing through select posts.

Anonymous

March 5, 2010 at 6:15 pm

See the great “Why Most Published Research Findings Are False” in PLoS Medicine.
1. A paper gets press for being exciting.
2. The most likely reason something is exciting is that it overturns existing scientific consensus.
3. Since scientific consensus is usually based on something, an ‘exciting paper’ usually should be read as creating controversy, not proving a fact.
We’d be better served by giving publicity to research of high methodologic quality, which is more likely to be accurate, and by allowing ‘shocking’ data to be digested and critiqued (i.e., through follow-up research) before blasting it on the airwaves. Obviously, some blame goes to the journalists here, but more should probably go to the major journals. Since a ‘too good to be true’ study probably is, Science and (more importantly) the clinical journals that change practice (NEJM, JAMA) should be more accurate and less exciting. A minimal sketch of the arithmetic behind this point follows below.
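
A minimal sketch of the arithmetic behind this comment’s point, using the positive-predictive-value formula from the Ioannidis paper cited above; the alpha, beta, and prior-odds values are illustrative assumptions, not figures from any of the studies discussed:

% PPV = probability that a claimed (statistically significant) finding is true.
% R = prior odds that the tested relationship is real; alpha = significance level;
% beta = type II error rate. (Ioannidis 2005, ignoring bias and multiple testing.)
\[
  \mathrm{PPV} = \frac{(1-\beta)\,R}{R + \alpha - \beta R}
\]
% With the assumed alpha = 0.05 and beta = 0.20:
%   a routine hypothesis, R = 1/10:    PPV = 0.08/0.13   (about 62% likely true)
%   a surprising hypothesis, R = 1/50: PPV = 0.016/0.066 (about 24% likely true)

On these assumptions, a surprising positive result is more likely false than true, which is why an ‘exciting paper’ is better read as opening a controversy than as settling one.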

Joseph Arpaia, MD

March 6, 2010 at 7:33 am

I recall the late Nobel laureate in physics Richard Feynman stating that an essential component of scientific research is replication of the results before rushing to publish. His attitude seemed to be that this was largely ignored in the medical, psychological, and social sciences.
If scientists and scientific journals were going to be rigorous, they would not publish a study that had not been replicated. However, that would mean few, if any, studies would get published.
This would be beneficial for the public and the scientific community, but disastrous for academics who need a well-padded CV to advance in their careers.
–Joe

Hmm?

March 6, 2010 at 2:44 pm

If the role of the NCI and the Cleveland Clinic in the Science paper was not to replicate the original findings, then why exactly were they both on it?
Given that nothing published since the Science paper studied the same cohort, I question why Science is being knocked while nobody questions why the BMJ would publish a study using the cohort it did.
Perhaps ‘critical thinking’ doesn’t extend to understanding the differences between the Oxford, Fukuda, and Canadian criteria.