Noninferiority trials have increased dramatically in number as researchers try to show that new medical devices and drugs are as safe and effective as established therapies. However, the way these studies are designed and interpreted could use a revamp, a pair of reviewers wrote in The New England Journal of Medicine.
Laura Mauri, MD, and Ralph B. D’Agostino Sr., PhD, noted there were six times more noninferiority trials in 2015 than in 2005, rising from fewer than 100 to nearly 600 according to a search of the MEDLINE database.
“Although it is not statistically possible to prove that two treatments are identical, it is possible to determine that a new treatment is not worse than the control treatment by an acceptably small amount, with a given degree of confidence,” they wrote.
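That logic can be made concrete with a small sketch. The function below is a hypothetical illustration only, not the authors' method: it uses a one-sided Wald confidence interval for the difference in event rates (one common approach among several) and invented trial numbers, declaring noninferiority when the upper bound of the difference falls below a pre-specified margin.

```python
# Hypothetical sketch of a noninferiority comparison. All numbers,
# names and the choice of a Wald interval are illustrative assumptions,
# not taken from the NEJM review.
import math

def noninferiority_test(events_new, n_new, events_ctrl, n_ctrl,
                        margin, z=1.6449):
    """Return True if the new treatment is noninferior to control.

    Builds a one-sided 95% Wald confidence interval for the difference
    in event rates (new minus control, where higher is worse) and
    declares noninferiority when the upper bound is below the margin.
    """
    p_new = events_new / n_new
    p_ctrl = events_ctrl / n_ctrl
    diff = p_new - p_ctrl
    se = math.sqrt(p_new * (1 - p_new) / n_new +
                   p_ctrl * (1 - p_ctrl) / n_ctrl)
    upper = diff + z * se  # worst plausible excess risk of the new arm
    return upper < margin

# Invented example: 45/500 vs 40/500 events, 4-percentage-point margin
print(noninferiority_test(45, 500, 40, 500, margin=0.04))  # True
# A larger excess of events fails the same margin
print(noninferiority_test(60, 500, 40, 500, margin=0.04))  # False
```

The key design choice the sketch exposes is the margin itself: shrink it and the same data no longer support noninferiority, which is why the authors press researchers to justify the margin against the expected benefit.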
But differences in study design can lead to different conclusions about the same treatments, as Mauri and D’Agostino illustrated by recapping a pair of trials comparing percutaneous coronary intervention (PCI) with coronary artery bypass grafting (CABG) for left main coronary artery disease.
PCI was deemed noninferior in one trial, while CABG was determined to be superior in the other. The differences: the trial in which CABG came out superior had a longer follow-up (five years versus three) and added revascularization to the primary composite endpoint of death, stroke and MI.
“The components of the composite clinical outcome and the timing of the outcome assessment are important in interpreting the study results and explaining expected treatment results to patients,” Mauri and D’Agostino wrote.
The reviewers also cautioned against using meta-analysis to combine multiple small studies that are individually underpowered.
“Heterogeneity and sources of statistical bias can make the results difficult to interpret; therefore, meta-analysis is a poor substitute for a randomized trial with an adequate sample size,” they wrote.
The Consolidated Standards of Reporting Trials (CONSORT) group, the FDA and the European Medicines Agency each have specific standards for noninferiority trials. But Mauri and D’Agostino recommended researchers also compare the noninferiority margin with the expected benefit during study design and interpretation, avoid composite endpoints that include discordant components and perform sensitivity analyses for missing data.
In addition, the authors warned against equating an underpowered superiority study with noninferiority, even when the results in the two arms of a trial look similar.
“As the traditional dictum states, ‘absence of evidence does not constitute evidence of absence,’” Mauri and D’Agostino wrote.
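The dictum can be shown numerically. In this hypothetical sketch (invented numbers, same illustrative Wald-interval approach as above), a small trial finds no significant difference between arms, yet its confidence interval is far too wide to rule out a clinically meaningful harm, so noninferiority is not demonstrated either.

```python
# Invented numbers illustrating why a failed superiority test is not
# evidence of noninferiority: the interval is simply too wide.
import math

def wald_upper_bound(e1, n1, e2, n2, z=1.6449):
    """One-sided 95% upper bound for the event-rate difference (new - control)."""
    p1, p2 = e1 / n1, e2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) + z * se

# Underpowered trial: 9/100 vs 8/100 events; assumed 4-point margin
upper = wald_upper_bound(9, 100, 8, 100)
print(round(upper, 3))   # 0.075 -- an excess risk this large is still plausible
print(upper < 0.04)      # False: noninferiority is NOT shown
```

The observed 1-point difference looks reassuringly small, but the data remain compatible with a 7.5-point excess risk, well beyond the assumed 4-point margin: absence of evidence, not evidence of absence.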