Health News Review

My enthusiasm for the work of Dr. John Ioannidis of Stanford is shared by Harold DeMonaco, one of our story reviewers on HealthNewsReview.org.  DeMonaco submitted the following guest blog post, which I’m pleased to publish:

———————————————

The headline of this post is a quote from a recent interview that Gary pointed to in an earlier blog post.  If you are a journalist, Dr. Ioannidis’ interview is definitely worth reading.  His quote is one that should be on your mind when you sit down to write a story about a recently published clinical study or a press release from a manufacturer.

Critically evaluating the medical literature is not something that comes naturally.  Having been at it for over forty years, I can attest that it is not easy to get it right.  I am frequently asked to review submitted manuscripts for several medical journals.  Having just completed one such review, I think it might be instructive to tell you what goes into the process.

I usually get an email from the editors of the journal asking if I would be willing to review a submitted manuscript.  The editors usually choose a couple of reviewers they think can do the manuscript justice.  And they give us three weeks in which to complete the process.  And no, there is no remuneration other than a once-a-year acknowledgement of the reviewers in the journal.  In those three weeks, you are expected to read and evaluate the submission, evaluate the subject population and methods, determine whether the conclusions are supported by the data, and finally determine its value to the readers of the journal and make recommendations on changes.  There are times when I think that I have spent more time reviewing the manuscript than the authors did writing it.

I should point out that no one gets any time away from their “day job” to conduct the review process.  The review is conducted in what otherwise would be personal time.  It is simply expected of you as a member of an academic center community.

I refer back to an article by Trisha Greenhalgh published in the British Medical Journal from time to time when I am reviewing a manuscript.  Although it was written as a guide for practicing physicians, it offers some good advice for journalists.  Here is the summary:

  • The first essential question to ask about the methods section of a published paper is: was the study original?
  • The second is: whom is the study about?
  • Thirdly, was the design of the study sensible?
  • Fourthly, was systematic bias avoided or minimised?
  • Finally, was the study large enough, and continued for long enough, to make the results credible?

—————————————

Publisher’s Note: Ioannidis appeared on a long-form radio talk show on Southern California Public Radio last week along with Ivan Oransky of Reuters Health.  You can listen to the broadcast here.

Comments

Greg Pawelski posted on November 1, 2012 at 12:58 pm

Harold

As someone who reviews clinical studies, I was wondering if you would comment on this.

Dr. Gabor D. Kelen stated on ResearchGate, a professional network for scientists and researchers, that hypothesis testing is based on certain statistical and mathematical principles that allow investigators to evaluate data by making decisions based on the probability or improbability of observing the results obtained.

However, classic hypothesis testing has its limitations, and the probabilities it calculates are inextricably linked to sample size. Furthermore, the meaning of the p value frequently is misconstrued as indicating that the findings are also of clinical significance.

Finally, hypothesis testing allows for four possible outcomes, two of which are errors that can lead to erroneous adoption of certain hypotheses:

1. The null hypothesis is rejected when, in fact, it is false.
2. The null hypothesis is rejected when, in fact, it is true (type I or alpha error).
3. The null hypothesis is conceded when, in fact, it is true.
4. The null hypothesis is conceded when, in fact, it is false (type II or beta error).

Type I error occurs when you reject the null hypothesis when it is in fact true, and type II error occurs when you fail to reject it when it is in fact false. The other two outcomes correspond to what you might look upon as true positives and true negatives.

The sample size issue is extremely important, for it goes to the next point in all of these discussions: when does statistical significance occur without being clinically relevant, and when does statistical significance fail to occur even though the actual finding proves to be of great relevance? Sample size dictates that.
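
To make the sample size point concrete, here is a minimal simulation sketch; the effect size, group sizes, trial count, and the 0.05 threshold are invented purely for illustration. With the same clinically trivial underlying difference, the fraction of simulated trials that reach “statistical significance” is driven almost entirely by how many patients are enrolled:

```python
# Minimal sketch: a fixed, clinically trivial effect becomes "significant"
# once the sample is large enough.  All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tiny_effect = 0.05          # hypothetical difference between groups, in SD units
n_simulated_trials = 500    # repeat each trial many times to average out noise

for n_per_group in (20, 200, 2000, 20000):
    hits = 0
    for _ in range(n_simulated_trials):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(tiny_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        if p < 0.05:
            hits += 1
    print(f"n per group = {n_per_group:>6}: "
          f"fraction of trials reaching p < 0.05 = {hits / n_simulated_trials:.2f}")
```

Setting the effect to zero in the same sketch shows the flip side: roughly 5 percent of trials still come out “significant” at any sample size, which is the type I (alpha) error rate.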

Harold DeMonaco posted on November 1, 2012 at 5:50 pm

Excellent points. I would be the first to admit that I am no expert in statistics. The sentence that resonates with me personally is, “Furthermore, the meaning of the p value frequently is misconstrued as indicating that the findings are also of clinical significance.”

Statistical significance is just that, statistical significance. Whether the difference implied is of importance to the patient is a separate but very important issue. For example, a statistically significant difference in cognitive function seen with a new treatment for Alzheimer’s may be based simply on a very sensitive and sophisticated instrument. The average patient and family member may not detect the subtle difference seen in the instrument. Another example, and the reason that I do not review oncology studies, is the statistically significant difference in survival, measured in weeks or days, associated with new treatment regimens. My friends who practice oncology can be quite enthusiastic about these “improvements in survival” or “disease-free intervals.” Just how important is a difference of three weeks in duration of remission? Scientific interest, in my view, can be a bit removed from pragmatic interest. Personally, I would define these improvements in cost per QALY (quality-adjusted life year), but I am not an editor of a medical journal.
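
As a rough, purely hypothetical illustration of that arithmetic (every number below is invented for the example), a three-week gain amounts to only a small fraction of a QALY, so even a modest added treatment cost produces a startling cost-per-QALY figure:

```python
# Back-of-the-envelope cost-per-QALY sketch; all numbers are hypothetical.
added_treatment_cost = 60_000    # extra cost of the new regimen, in dollars
extra_survival_years = 3 / 52    # a three-week gain, expressed in years
quality_weight = 0.6             # assumed quality-of-life weight during that time

qalys_gained = extra_survival_years * quality_weight     # about 0.035 QALYs
cost_per_qaly = added_treatment_cost / qalys_gained

print(f"QALYs gained: {qalys_gained:.3f}")
print(f"Cost per QALY gained: ${cost_per_qaly:,.0f}")    # roughly $1.7 million
```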

Many thanks for your comments

Michael Mirochna, MD posted on November 5, 2012 at 6:38 pm

Clinical significance is a dinosaur; it just doesn’t seem to exist in the literature. It was thrown out when we started relying on surrogate markers.

Greg Pawelski posted on November 6, 2012 at 12:40 am

Surrogate markers have the potential to be very helpful in oncology drug development by allowing us to more quickly identify active agents. The challenge is that validating a surrogate marker is difficult. The criteria for validation require that a treatment effect seen on the surrogate predicts the treatment effect on the true endpoint, and the only way to determine that is by examining a series of previously conducted trials. There is no way to validate a new surrogate endpoint in a new trial because you don’t have the true endpoint until you have observed the true endpoint, so to speak. The validation process requires the analysis of a large number of previously conducted trials through a meta-analysis. That requires cooperation among many research groups who are willing to provide the data. There are many examples of endpoints that are thought to be useful, but, in the end, don’t show a strong association with a true endpoint. Because of these false positives, we really do have to be cautious in our use of surrogate endpoints.
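
To illustrate the trial-level validation idea described above, here is a toy simulation sketch (the number of trials, effect sizes, and the strength of the surrogate link are all invented): across a set of completed trials, we ask whether the treatment effect observed on the surrogate predicts the treatment effect on the true endpoint.

```python
# Toy sketch of trial-level surrogate validation; all numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials = 30                 # hypothetical number of completed trials

# Simulated per-trial treatment effects on the surrogate endpoint.
surrogate_effects = rng.normal(0.3, 0.15, n_trials)

# Suppose only a small part of the true-endpoint effect tracks the surrogate.
weak_link = 0.2
true_endpoint_effects = weak_link * surrogate_effects + rng.normal(0.0, 0.1, n_trials)

slope, intercept, r, p, stderr = stats.linregress(surrogate_effects, true_endpoint_effects)
print(f"Trial-level R^2 between surrogate and true-endpoint effects: {r**2:.2f}")
# A low R^2 is the warning sign: a drug that moves the surrogate
# may still fail to move the outcome that matters to patients.
```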