There’s a very important “Observations” article in this week’s BMJ (subscription req’d for full access). University of Edinburgh scientist Cathie Sudlow writes about a paper in Science that reported an association between a retrovirus and chronic fatigue syndrome. Excerpts:
“… after reading and rereading the article and its online supplement I could find little description of the epidemiological methods. Where were the details of the characteristics and selection procedures for the cases and controls or of blinding of researchers to the case-control status of the samples? Where was the discussion of the potential roles of bias and confounding?
Concerned about the lack of basic methodological information in the Science report, some colleagues and I sent an e-letter response a week after the initial online publication of the paper. E-letters are described on Science’s website as “online-only, 400-word contributions for rapid, timely discussion.” But no e-letter responses to the paper appeared, and indeed still have not. In early January this year, nearly three months later, a Science editor emailed to explain that they had received a number of responses to the paper and had decided to consider our submission and others as refereed “technical comments,” which appear in Science Online with a response from the authors of the commented paper. By early February we had received referees’ largely enthusiastic comments and an invitation to resubmit an appropriately revised version of our comment, which will hopefully be published soon.
In the meantime, three published studies that used samples from the United Kingdom and the Netherlands have found no association between (the retrovirus) and chronic fatigue syndrome.
I am pleased that our concerns about potential shortcomings in the methods of the original Science report of this association are likely to be formally published. But the story has led me to reflect on how the whole process of scientific discovery is communicated. What can be done to avoid the false hope that follows an initial report of a breakthrough, which will often later be shown to be a false positive finding or an overestimate of the truth? (emphasis mine) Does a system of genuinely rapid responses to journal articles help, or is a slower and more selective approach such as that taken by Science a better way of reflecting the subsequent scientific debate? I am uncertain, although a forum for such debate must surely be offered by scientific and medical journals.
More importantly, though, I believe that scientists, journal editors, and reviewers must all take more responsibility for ensuring that the published scientific record contains all the information needed to judge the accuracy and reliability of the reported results. Scientists and journals must be careful that the excitement of novel findings and the draw of the impact factor do not lead to important details being overlooked.”