In the Wall Street Journal, Gautam Naik has a thoughtful piece, “Analytical Trend Troubles Scientists,” hitting on the limitations of – and the explosion in the number of – observational studies. Excerpts:
“While the gold standard of medical research is the randomly controlled experimental study, scientists have recently rushed to pursue observational studies, which are much easier, cheaper and quicker to do. Costs for a typical controlled trial can stretch high into the millions; observational studies can be performed for tens of thousands of dollars.
In an observational study there is no human intervention. Researchers simply observe what is happening during the course of events, or they analyze previously gathered data and draw conclusions. In an experimental study, such as a drug trial, investigators prompt some sort of change – by giving a drug to half the participants, say – and then make inferences.
But observational studies, researchers say, are especially prone to methodological and statistical biases that can render the results unreliable. Their findings are much less replicable than those drawn from controlled research. Worse, few of the flawed findings are spotted – or corrected – in the published literature.
“You can troll the data, slicing and dicing it any way you want,” says S. Stanley Young of the U.S. National Institute of Statistical Sciences. Consequently, “a great deal of irresponsible reporting of results is going on.”
Despite such concerns among researchers, observational studies have never been more popular.
Nearly 80,000 observational studies were published in the period 1990-2000 across all scientific fields, according to an analysis performed for The Wall Street Journal by Thomson Reuters. In the following period, 2001-2011, the number of studies more than tripled to 263,557, based on a search of Thomson Reuters Web of Science, an index of 11,600 peer-reviewed journals world-wide. The analysis likely doesn’t capture every observational study in the literature, but it does indicate a pattern of growth over time.
A vast array of claims made in medicine, public health and nutrition are based on observational studies, as are those about the environment, climate change and psychology.”
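Young's "slicing and dicing" remark describes the multiple-comparisons problem: test enough candidate variables against an outcome and some will look statistically associated by chance alone. A minimal sketch (my illustration, not from the article – the patient counts, biomarker counts, and threshold are all made up for demonstration) shows pure noise producing "significant" findings:

```python
import math
import random

random.seed(42)  # fixed seed so the demonstration is reproducible


def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


n_patients = 100
n_biomarkers = 200  # hypothetical number of candidate markers screened

# Outcome and every "biomarker" are independent random noise:
# by construction, no real association exists anywhere.
outcome = [random.gauss(0, 1) for _ in range(n_patients)]
biomarkers = [
    [random.gauss(0, 1) for _ in range(n_patients)]
    for _ in range(n_biomarkers)
]

# For a single test, |r| > ~2/sqrt(n) corresponds roughly to p < 0.05,
# so about 5% of pure-noise markers should cross it by chance.
threshold = 2 / math.sqrt(n_patients)
false_hits = sum(
    1 for marker in biomarkers
    if abs(pearson(marker, outcome)) > threshold
)

print(f"{false_hits} of {n_biomarkers} noise biomarkers look 'significant'")
```

With 200 noise variables and a 5% per-test false-positive rate, roughly ten spurious "discoveries" are expected – each one publishable as an observational finding if the other 190 tests go unreported.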
The article addresses the “hot area of medical research” – the search for biomarkers.
“The presence or absence of the biomarkers in a patient’s blood, some theorized, could indicate a higher or lower risk for heart disease – the biggest killer in the Western world.
Yet these biomarkers “are either completely worthless or there are only very small effects” in predicting heart disease, says John Ioannidis of Stanford University, who extensively analyzed two decades’ worth of biomarker research and published his findings in Circulation Research journal in March. Many of the studies, he found, were undermined by statistical biases, and many of the biomarkers showed very little predictive ability of heart disease.
His conclusion is widely upheld by other scientists: Just because two events are statistically associated in a study, it doesn’t mean that one necessarily sets off the other. What is merely suggestive can be mistaken as causal.
That partly explains why observational studies in general can be replicated only 20% of the time, versus 80% for large, well-designed randomly controlled trials, says Dr. Ioannidis. Dr. Young, meanwhile, pegs the replication rate for observational data at an even lower 5% to 10%.
Whatever the figure, it suggests that a lot of flawed studies are getting published. Those papers can often trigger pointless follow-on research and affect real-world practices.”
But the story also appropriately points out the contributions observational studies have made:
“Observational studies do have many valuable uses. They can offer early clues about what might be triggering a disease or health outcome. For example, it was data from observational trials that flagged the increased risk of heart attacks posed by the arthritis drug Vioxx. And it was observational data that helped researchers establish the link between smoking and lung cancer.”
I have written many times about the weakness of news stories that fail to point out the limitations of observational studies and – more specifically – stories that use causal language to describe the findings from observational studies that can “only” point to statistical associations.
News consumers and health care consumers need to better understand the limitations of all studies – including randomized clinical trials.