“New cancer drug works as well as existing therapy, with fewer side effects.”
If that headline sounds too good to be true, it just might be.
A news release or story that proclaims a new treatment “just as effective as,” “comparable to,” or “as good as” an existing therapy might spring from a non-inferiority trial.
Technically speaking, these studies are designed to test whether an intervention is “not acceptably worse” in terms of its effectiveness than what’s currently used.
However, non-inferiority trials should raise a red flag for consumers.
These trials have proliferated as drug and device makers find it harder to improve upon existing treatments. So instead, they devise products they hope work just as well but with an extra benefit, such as more convenient dosing, lower cost, or fewer side effects.
If a company can show its product is just as effective as the current standard treatment but with an added perk, it might gain a marketing edge. Problem is, the studies used to generate that edge often aren’t considered trustworthy.
Generally speaking, non-inferiority trials are considered less credible than a more common trial design, the superiority trial, which determines whether one treatment outperforms another treatment or a placebo. That’s because non-inferiority trials are often based on murky assumptions that could favor the new product being tested.
These trials also rarely deliver bad news for their sponsors: industry-funded non-inferiority trials almost always conclude that the new product is, in fact, non-inferior. That scarcity of negative findings “raises the provocative questions of whether industry-sponsored non-inferiority trials offer any value—aside from capturing market share,” wrote Vinay Prasad, MD, in an editorial in the Journal of General Internal Medicine entitled “Non-Inferiority Trials in Medicine: Practice Changing or a Self-Fulfilling Prophecy?”
In a separate concern, ethical issues have been raised about whether some non-inferiority trials should be conducted at all, because they might expose patients to potentially worse treatments in order to advance a commercial goal.
Evaluating a non-inferiority trial can be difficult, but here are some questions journalists and consumers can ask.
Three questions to ask
1. Is there solid evidence that the comparison treatment works?
A non-inferiority trial might use a comparison treatment that isn’t effective.
For example, HealthNewsReview.org reviewed a CBS story about a trial that compared yoga with physical therapy for back pain.
The story proclaimed that yoga was found to be “as good as” physical therapy.
However, as our reviewers noted, physical therapy hasn’t been shown to be very good at treating back pain. They cited an editorial stating that when it comes to chronic back pain, “any single treatment approach is unlikely to prove helpful to all or even most patients.”
The story gave a misleading impression about the effectiveness of yoga for back pain by failing to mention the weakness of the comparison treatment — physical therapy. It went further astray with a headline that proclaimed the “promise of yoga” for treating back pain.
2. Is the comparison treatment the current standard of care? And is it effective in this trial?
What’s proven to work in the past might not be the current standard of care. Even if the comparison treatment is the current standard of care, the trial might be set up so that the comparison treatment does not perform effectively. For example, there may be a change in dosage, patient population, or measured outcome from a previous trial in which the comparison treatment was proven effective.
3. What is the margin for “non-inferiority”?
The margin for non-inferiority might be big, making it possible for a treatment that’s meaningfully less effective to be declared non-inferior.
For example, if the study’s margin allows a 7% death rate for the new drug versus 5% for the comparator, the new drug will be declared non-inferior even if an additional 20 patients die out of every 1,000 treated.
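To make that arithmetic concrete, here is a minimal sketch. The 7% and 5% death rates come from the example above; the 2-percentage-point margin is an assumption chosen so the numbers match that example, not a figure from any real trial.

```python
def extra_events_per_1000(new_pct, standard_pct):
    # Absolute risk difference, in percentage points, scaled to 1,000 patients.
    return (new_pct - standard_pct) * 10

def is_non_inferior(new_pct, standard_pct, margin_pct):
    # Declared non-inferior when the excess event rate stays within the margin.
    return new_pct - standard_pct <= margin_pct

# Figures from the example above: 7% deaths on the new drug vs. 5% on the
# standard one. The 2-point margin is an illustrative assumption.
print(extra_events_per_1000(7, 5))   # 20 extra deaths per 1,000 treated
print(is_non_inferior(7, 5, 2))      # True: counted as "non-inferior"
```

The point of the sketch: “non-inferior” is a verdict about staying inside a pre-chosen margin, not a claim that the absolute difference is harmless.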
An Abbott news release reported “positive” results for its Absorb stent, which was designed to produce fewer long-term complications than an existing metallic stent by dissolving a few years after it was implanted in an artery. After one year, the rate of serious events — cardiac deaths, heart attacks, and surgeries to restore blood supply — was 7.8% in patients who had the Absorb stent versus 6.1% in those with a standard metallic stent.
That was within the study’s margin for non-inferiority, and in a news release an Abbott official declared Absorb “comparable to the best-in-class metallic stent.”
But some researchers noted that significantly more patients would experience a serious adverse event with the new stent. Forbes pointed to an editorial by independent researcher Robert Byrne that cautioned doctors about embracing the technology: “Most clinicians in everyday practice would not accept this degree of difference between two stents in their catheterization laboratories. This means that the clinical relevance of the finding of statistical non-inferiority is open to question.”
The problem of ‘biocreep’
Successive non-inferiority trials with large margins can lead to a problem called “biocreep,” in which a drug that’s no better than a placebo is proclaimed “non-inferior” to one that works well. Here’s how it’s explained by the Statistics Collaborative, a biostatistical consulting firm:
Suppose an earlier trial found drug A to be clearly better than placebo, then several years later, drug B is found non-inferior to drug A in a trial with a large non-inferiority margin. Drug C is then compared to drug B, again with a large non-inferiority margin, and shown to be non-inferior to B. This is an example of biocreep; at each step, the new drug has been shown to be not unacceptably worse than the previous. Hence, a comparison of a new drug with drug C may not be fair, because drug C may in fact be less effective than drug A and, if the margins were too big, even less effective than placebo.
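The chain described above can be sketched in a few lines. All the numbers here are assumptions for illustration: suppose drug A lowers the event rate to 10% versus 16% on placebo, and each successive trial uses a generous non-inferiority margin of 3 percentage points, with each new drug being exactly as much worse as the margin allows.

```python
# Illustrative worst-case sketch of biocreep. Numbers are assumptions:
# drug A achieves a 10% event rate versus 16% on placebo, and each new
# drug is as much worse as a 3-point non-inferiority margin permits,
# while still being declared "non-inferior" to its predecessor.
placebo_rate = 16.0
effect_rates = {"A": 10.0}
margin = 3.0

previous = "A"
for drug in ["B", "C", "D"]:
    effect_rates[drug] = effect_rates[previous] + margin
    previous = drug

for drug, rate in effect_rates.items():
    verdict = "better than placebo" if rate < placebo_rate else "no better than placebo"
    print(f"Drug {drug}: event rate {rate:.0f}% -> {verdict}")

# Drug A: event rate 10% -> better than placebo
# Drug B: event rate 13% -> better than placebo
# Drug C: event rate 16% -> no better than placebo
# Drug D: event rate 19% -> no better than placebo
```

Two “successful” non-inferiority trials are enough, in this worst case, to anoint a drug (C) that is no better than placebo — which is exactly the biocreep scenario the Statistics Collaborative describes.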
Weighing the trade-offs
Patients and physicians must consider the trade-offs between potential benefits and possibly slightly worse performance, as well as uncertainties that come with the adoption of any new treatment. This is a judgment call.
Italian neurologist Stefano Ricci, MD, put it this way:
Would you buy a car which is definitely less good in terms of safety and durability than the model you had set out to buy, just because the first vehicle is a bit less expensive? The answer to this question obviously depends on the degree of both these differences. If the safety is just 0.05% inferior and the cost is 20% less, I – and, I expect, most of you – would probably say ‘Yes OK,’ but if the percentages were inverted we all would say ‘No thanks’.
… Thus, when reading papers or protocols based on non-inferiority, the right question we have to ask is ‘How much worse is it?’ This should be immediately followed by another question: ‘Are my patients keen to be offered a less effective treatment if it carries a different, clear-cut advantage?’ If the answer to the first question is a very low figure, and the answer to the second question is definitely yes, then I would recommend this new, non-inferior (or rather, ‘just a little bit worse’) treatment to my patients.
Remember that dissolving stent?
Many doctors decided its promise of fewer complications years down the road wasn’t worth the gamble of more frequent serious adverse events in the short term, as well as higher cost and longer insertion time. Ultimately, it was pulled off the market.
Further reading
- JAMA users guide to non-inferiority trials
- Through the looking glass: understanding non-inferiority
- What Does ‘Non-Inferior to’ Really Mean?