Report cards that compare provider outcomes may be unreliable when providers have small caseloads, according to a study published online March 11 in Circulation: Cardiovascular Quality and Outcomes.
Peter C. Austin, PhD, of the University of Toronto, and Matthew J. Reeves, PhD, of Michigan State University in East Lansing, performed Monte Carlo simulations to evaluate the effect of hospital volume on report card accuracy. Monte Carlo simulations, they explained, allow investigators to model risk-adjusted between-hospital differences while accounting for the uncertainty surrounding each hospital's true performance.
They analyzed data from patients hospitalized with acute MI in Ontario and used this information to set the simulation parameters. They generated simulated 30-day mortality risk scores for each patient, hospital-specific random effects on 30-day mortality, and 30-day mortality outcomes for each simulated subject.
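The simulation design described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual code: it assumes a logistic model in which each patient's risk score and a normally distributed hospital-specific random effect combine on the log-odds scale, and all parameter values (number of hospitals, effect and risk-score standard deviations) are invented for illustration, anchored only to the study's reported 11.1 percent overall mortality rate.

```python
import math
import random

random.seed(42)

# Illustrative parameters -- not the study's actual estimates.
N_HOSPITALS = 50       # number of simulated hospitals
VOLUME = 300           # simulated acute-MI cases per hospital
BASELINE = math.log(0.111 / (1 - 0.111))  # log-odds of the 11.1% overall rate
TAU = 0.3              # SD of hospital-specific random effects (assumed)
SIGMA = 0.5            # SD of patient risk scores on the log-odds scale (assumed)

def simulate_hospital(volume):
    """Return the observed 30-day mortality rate for one simulated hospital."""
    u = random.gauss(0.0, TAU)              # hospital-specific random effect
    deaths = 0
    for _ in range(volume):
        risk = random.gauss(0.0, SIGMA)     # patient-level 30-day mortality risk score
        logit = BASELINE + risk + u         # logistic model on the log-odds scale
        p = 1.0 / (1.0 + math.exp(-logit))
        deaths += random.random() < p       # Bernoulli 30-day mortality outcome
    return deaths / volume

rates = [simulate_hospital(VOLUME) for _ in range(N_HOSPITALS)]
print(f"mean simulated 30-day mortality: {sum(rates) / len(rates):.3f}")
```

Repeating this process many times, and comparing each hospital's observed rate with the random effect that generated it, is what lets a simulation study quantify how often report cards classify hospitals correctly.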
The overall 30-day mortality rate was 11.1 percent. The analysis found that report card accuracy depended on case volume, with accuracy improving as both case volume and mortality rate increased. But volume had to reach a substantial level before the report cards became highly accurate: volume had to exceed 300 before at least 70 percent of hospitals were classified correctly based on their outcomes, and it had to exceed 1,000 before at least 80 percent were classified correctly.
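The volume effect can be demonstrated with a simplified version of the same simulation. Here each hospital's "true" performance is its random effect, the "report card" ranks hospitals into tertiles of observed mortality, and accuracy is the fraction of hospitals whose observed tertile matches their true tertile. This tertile scheme and all parameter values are illustrative stand-ins, not the study's classification method; the point is only the qualitative pattern that accuracy rises with case volume.

```python
import math
import random

random.seed(7)

N_HOSPITALS = 100
TAU = 0.3                               # SD of hospital effects (assumed)
BASE = math.log(0.111 / (1 - 0.111))    # log-odds of the 11.1% overall rate

def tertile_labels(values):
    """Assign each value a tertile label (0 = lowest third, 2 = highest third)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, i in enumerate(order):
        labels[i] = rank * 3 // len(values)
    return labels

def classification_accuracy(volume):
    """Fraction of hospitals whose observed tertile matches their true tertile."""
    effects = [random.gauss(0.0, TAU) for _ in range(N_HOSPITALS)]
    observed = []
    for u in effects:
        p = 1.0 / (1.0 + math.exp(-(BASE + u)))   # hospital's true mortality risk
        deaths = sum(random.random() < p for _ in range(volume))
        observed.append(deaths / volume)          # observed mortality rate
    true_t = tertile_labels(effects)
    obs_t = tertile_labels(observed)
    return sum(t == o for t, o in zip(true_t, obs_t)) / N_HOSPITALS

accs = {vol: classification_accuracy(vol) for vol in (50, 300, 1000)}
for vol, acc in accs.items():
    print(f"volume {vol:>4}: {acc:.0%} of hospitals correctly classified")
```

At low volumes, sampling noise in the observed mortality rate swamps the true between-hospital differences, so many hospitals land in the wrong tertile; as volume grows, the observed ranking converges toward the true one.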
Their findings, the authors explained, suggest that investigators and policymakers should interpret report card results cautiously, because their accuracy is uncertain, particularly for low-volume hospitals.
“In-depth investigations can subsequently be performed at individual hospitals to identify hospital-specific explanations for either their exemplary performance or for their poor performance,” they wrote.