Kevin Lomangino is the managing editor of HealthNewsReview.org. He tweets as @KLomangino.
This post has been updated; scroll to the bottom for details.
An egregious example of pharma spin was highlighted by Dr. Vinay Prasad, an oncologist at Oregon Health & Science University, this week on Twitter.
He pointed to a Novartis promotional website for the immunosuppressant drug everolimus (brand name Afinitor) that’s used to treat kidney and other cancers.
His annotation of a graphic on the site called attention to some startling doublespeak:
The graphic claims that there was a 6.3 month difference in overall survival between groups who did and didn’t receive the drug for the treatment of pancreatic tumors. The fine print describes the difference as “not statistically significant but clinically meaningful.”
In the conference abstract where these results were presented, the study authors reported the findings as “not statistically significant” – full stop. Nothing about “clinically meaningful” improvement.
In academic communications, when the difference between a treatment and control group is not statistically significant, it’s customary for researchers to conclude that there was “no difference” between the groups.
When results aren’t statistically significant, researchers can’t be sufficiently confident that any benefit they observed is real. Such findings are considered speculative until confirmed by other studies.
Sometimes, a result that was initially “not significant” might well reach the threshold of significance in a bigger study group with more patients, which is what this promotional material seems to anticipate.
But that’s a massive leap of logic, because nobody knows what would happen in a larger study or whether the benefits seen in a smaller group of patients would hold up.
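To see why sample size matters here, consider a minimal sketch with entirely made-up numbers (these are not the Afinitor trial data): the same observed survival difference can fail to reach statistical significance in a small trial yet cross the threshold in a larger one, simply because more patients shrink the uncertainty.

```python
# Hypothetical illustration only — assumed means and SDs, not real trial data.
from scipy.stats import ttest_ind_from_stats

# Assumed summary statistics: mean survival in months, same 6.3-month gap.
mean_drug, mean_ctrl, sd = 13.0, 6.7, 20.0

# A small trial: 40 patients per arm.
_, p_small = ttest_ind_from_stats(mean_drug, sd, 40, mean_ctrl, sd, 40)

# The identical observed difference with 160 patients per arm.
_, p_large = ttest_ind_from_stats(mean_drug, sd, 160, mean_ctrl, sd, 160)

print(f"n=40 per arm:  p = {p_small:.3f}")   # above 0.05: not significant
print(f"n=160 per arm: p = {p_large:.3f}")   # below 0.05: significant
```

The point of the sketch is that significance depends on the evidence actually collected; projecting what a bigger trial "would" show, as the promotional material implicitly does, is speculation until that trial is run.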
Despite that uncertainty, the Novartis promotional material treats this negative finding as evidence of benefit and hypes it as “clinically meaningful” to doctors visiting its website.
Dr. Susan Molchan, a HealthNewsReview.org contributor who has extensive experience in clinical research at the National Institutes of Health, called the advertising “blatantly misleading.”
“People, doctors included, are more easily taken in because it’s a medical product and has FDA approval, which unfortunately is becoming less and less meaningful, as Dr. Prasad and others have pointed out with the increasingly lenient use of surrogate endpoints that haven’t been well studied enough to show that they really predict anything that will really be helpful to a patient,” she said.
Expensive and toxic cancer drugs are often approved and used despite the fact that they don’t work very well.
Everolimus specifically has been approved by the FDA for treatment of a wide range of aggressive cancers even though its side effects are very serious and there’s no proof that it extends life – as reported in this excellent Milwaukee Journal Sentinel/MedPage Today investigation.
Research has repeatedly shown that doctors are likely to overestimate the benefits and underestimate the harms of many drugs and procedures.
While misleading pharmaceutical spin isn’t the only reason for this, it’s clearly an important contributor to the problem.
Dr. Susan Wei, a biostatistician at the University of Minnesota and a HealthNewsReview.org contributor, shared some thoughts on the issue of statistical significance and how it relates to “clinically meaningful” results. She told me that while it’s desirable to have “high power” in clinical studies (achieved through a larger sample size/more patients), there is actually such a thing as too much power in a study:
“Specifically, studies with excessive power may detect statistically significant results that are not clinically meaningful. Researchers and practitioners would do well to keep this in mind: statistically significant does not mean clinically meaningful.”
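Dr. Wei’s point can be illustrated with another hedged sketch using invented numbers: with enough patients, even a half-month survival difference – arguably of no clinical consequence – becomes statistically significant.

```python
# Hypothetical illustration of "too much power" — assumed stats, not trial data.
from scipy.stats import ttest_ind_from_stats

# A tiny 0.5-month difference in mean survival, with realistic spread.
mean_drug, mean_ctrl, sd = 10.5, 10.0, 20.0

# At 20,000 patients per arm, the trivial gap is statistically significant.
_, p = ttest_ind_from_stats(mean_drug, sd, 20_000, mean_ctrl, sd, 20_000)

print(f"0.5-month difference, n=20,000 per arm: p = {p:.3f}")  # below 0.05
```

Significance, in other words, measures confidence that a difference exists – not whether the difference is big enough to matter to a patient.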
With respect to the current study and its portrayal by Novartis, however, she wrote:
“A result that is statistically insignificant is not meaningful, period. Thus, we cannot say a result is statistically insignificant and clinically meaningful at the same time. The only hope is that perhaps the current study was inadequately powered so that it could not detect a statistically significant effect, which can be remedied in future studies. But since we are beholden to current evidence, we cannot make any meaningful statements from a statistically insignificant result.”