Health News Review

In the Wall Street Journal, Gautam Naik has a thoughtful piece, “Analytical Trend Troubles Scientists,” hitting on the limitations of – and the explosion in the number of – observational studies.  Excerpts:

“While the gold standard of medical research is the randomly controlled experimental study, scientists have recently rushed to pursue observational studies, which are much easier, cheaper and quicker to do. Costs for a typical controlled trial can stretch high into the millions; observational studies can be performed for tens of thousands of dollars.

In an observational study there is no human intervention. Researchers simply observe what is happening during the course of events, or they analyze previously gathered data and draw conclusions. In an experimental study, such as a drug trial, investigators prompt some sort of change—by giving a drug to half the participants, say—and then make inferences.

But observational studies, researchers say, are especially prone to methodological and statistical biases that can render the results unreliable. Their findings are much less replicable than those drawn from controlled research. Worse, few of the flawed findings are spotted—or corrected—in the published literature.

“You can troll the data, slicing and dicing it any way you want,” says S. Stanley Young of the U.S. National Institute of Statistical Sciences. Consequently, “a great deal of irresponsible reporting of results is going on.”

Despite such concerns among researchers, observational studies have never been more popular.

Nearly 80,000 observational studies were published in the period 1990-2000 across all scientific fields, according to an analysis performed for The Wall Street Journal by Thomson Reuters. In the following period, 2001-2011, the number of studies more than tripled to 263,557, based on a search of Thomson Reuters Web of Science, an index of 11,600 peer-reviewed journals world-wide. The analysis likely doesn’t capture every observational study in the literature, but it does indicate a pattern of growth over time.

A vast array of claims made in medicine, public health and nutrition are based on observational studies, as are those about the environment, climate change and psychology.”
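Dr. Young's point about "slicing and dicing" data has a simple statistical basis: if you test enough candidate variables against an outcome, some will look significant by chance alone. A minimal simulation sketch (my illustration, not from the article) shows this multiple-comparisons trap using nothing but random noise:

```python
import random

# Illustrative only: 100 candidate "risk factors" that are pure noise,
# tested against a random outcome. Roughly 5% will cross the usual
# p < 0.05 threshold by chance -- spurious "associations".
random.seed(42)

N = 50           # subjects in the hypothetical study
VARIABLES = 100  # candidate predictors, all random noise

def mean(xs):
    return sum(xs) / len(xs)

def correlation(xs, ys):
    # Pearson correlation coefficient
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

outcome = [random.gauss(0, 1) for _ in range(N)]

# For n = 50, |r| > 0.28 corresponds approximately to p < 0.05
hits = 0
for _ in range(VARIABLES):
    noise = [random.gauss(0, 1) for _ in range(N)]
    if abs(correlation(noise, outcome)) > 0.28:
        hits += 1

print(hits)  # typically a handful of spurious "findings" out of 100
```

An observational dataset with dozens of measured variables invites exactly this kind of search, which is why uncorrected exploratory findings so often fail to replicate.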

The article addresses the “hot area of medical research” – the search for biomarkers.

“The presence or absence of the biomarkers in a patient’s blood, some theorized, could indicate a higher or lower risk for heart disease—the biggest killer in the Western world.

Yet these biomarkers “are either completely worthless or there are only very small effects” in predicting heart disease, says John Ioannidis of Stanford University, who extensively analyzed two decades’ worth of biomarker research and published his findings in Circulation Research journal in March. Many of the studies, he found, were undermined by statistical biases, and many of the biomarkers showed very little predictive ability of heart disease.

His conclusion is widely upheld by other scientists: Just because two events are statistically associated in a study, it doesn’t mean that one necessarily sets off the other. What is merely suggestive can be mistaken as causal.

That partly explains why observational studies in general can be replicated only 20% of the time, versus 80% for large, well-designed randomly controlled trials, says Dr. Ioannidis. Dr. Young, meanwhile, pegs the replication rate for observational data at an even lower 5% to 10%.

Whatever the figure, it suggests that a lot of unreliable studies are getting published. Those papers can often trigger pointless follow-on research and affect real-world practices.”

But the story also appropriately points out the contributions observational studies have made:

“Observational studies do have many valuable uses. They can offer early clues about what might be triggering a disease or health outcome. For example, it was data from observational trials that flagged the increased risk of heart attacks posed by the arthritis drug Vioxx. And it was observational data that helped researchers establish the link between smoking and lung cancer.”

I have written many times about the weakness of news stories that fail to point out the limitations of observational studies and – more specifically – stories that use causal language to describe the findings from observational studies that can “only” point to statistical associations.

News consumers and health care consumers need to better understand the limitations of all studies – including randomized clinical trials.


Greg Pawelski posted on May 3, 2012 at 11:22 am

The randomized controlled clinical trial will likely remain the so-called standard of evidence for clinical decision-making in cancer medicine; however, observational methods and systems biology are clearly useful. Even granting the importance of clinical trials, it is crucial to work on reducing their inherent limitations, including uncertain generalizability, and to expand the randomized-trial paradigm to areas beyond proving biological activity, such as diagnostic testing.

All the rigorous clinical trials in cancer medicine identify the best treatments for the average patient (do cancer cells prefer Coke or Pepsi?). But cancer is not an average disease. Cancer is far more heterogeneous in its response to individual drugs than bacterial infections are. The tumors of different patients respond differently to chemotherapy, which calls for individualized treatment based on testing the properties of each patient’s cancer.

As the number of possible treatment options supported by completed randomized clinical trials increases, the scientific literature becomes increasingly vague for guiding physicians. Almost any combination therapy is acceptable in the treatment of cancer these days. Physicians are confronted on nearly a daily basis by decisions that have not been addressed by randomized clinical trial evaluation. Their decisions are made according to experience, new basic science insights, bias or personal preference, philosophical beliefs, etc.

Whatever clinical response is seen on average across the patients in a randomized trial is no indication of what will happen to an individual at any particular time. Such trials identify the “best guess” treatment for the “average” patient. There is no guarantee, nor any proof, that what works for the “average” patient population will work for the individual.

There are hundreds of therapeutic drug regimens, any one of which, alone or in combination, can help cancer patients. The system is overloaded with drugs and underloaded with the wisdom and expertise for using them. We have an expanding list of treatments that are partially effective in a minority of patients, ineffective in a majority, remarkably effective in a few, and enormously expensive. The fastest way to improve things is to match treatment to the patient, not to the “average” population.

Until the controlled, randomized trial approach delivers curative results with a high success rate, physicians’ freedom to integrate promising insights and methods remains essential. While I applaud the discoveries the clinical trial process sometimes makes, we forget at our peril that most of the discoveries in cancer medicine have been observational.

Marilyn Mann posted on June 2, 2012 at 8:14 am

“For example, it was data from observational trials that flagged the increased risk of heart attacks posed by the arthritis drug Vioxx.”

I am puzzled by this statement. I thought it was a RCT, namely the VIGOR trial, which was published in the NEJM in November 2000, in which the increased risk of heart attacks from Vioxx first emerged. Does anyone know what observational data the author is referring to?