While hospitals are constantly ranked on performance, how accurate are these assessments? Research published in the May issue of Circulation: Cardiovascular Quality and Outcomes found that these methods may be imprecise, and the authors wrote that rankings should not be used to compare risk-adjusted cardiac mortality rates.
“Football leagues, college and university rankings, the Thomson Reuters league tables in business: In all types of branches, teams, institutions, or companies are ranked based on their performance,” wrote Sabrina Siregar, MD, of the University Medical Center Utrecht in the Netherlands, and colleagues. “Such lists, however, could have major consequences in cardiac surgery and the rest of healthcare.”
To better understand the precision of these types of ranking lists in cardiac surgery, Siregar et al looked at data on cardiac surgery patients at 16 cardiothoracic centers in the Netherlands between Jan. 1, 2007, and Dec. 31, 2009. Data were taken from the Netherlands Association for Cardio-Thoracic Surgery database and included 46,883 surgical procedures. The ranks were assessed using mortality rates.
The authors reported a mean mortality rate of 3 percent over the three-year span. The mean EuroSCORE reported was 7 percent. Hospital volume also varied, ranging from 500 to 2,000 patients per year and from roughly 1,600 to 5,700 over the three years combined. Despite this variation, however, the authors reported that hospital volume had no effect on mortality.
Siregar et al said that “the highest and lowest ranked hospitals are consistently ranked in high and low positions, respectively, despite random variability (due to chance); however, most hospitals are in the middle part of the ranking lists, where the flat and wide distribution curves indicate that the hospital ranks are likely to fluctuate due to chance.”
The authors said that this variability must be taken into account in these ranking systems; otherwise they will not reflect the true between-hospital variability. “This study showed that ranking statistics were very imprecise,” the authors added. “Ranks were likely to fluctuate merely due to chance and were thus instable.”
They also noted that a hospital's rank may change without any change in the underlying mortality rate when another hospital's rank shifts. “High and low ranks do not necessarily imply absolute high or low performance,” they wrote.
The authors made the following conclusions:
- The interpretation of ranking lists requires knowledge about variability due to chance in order to discern systematic differences from random variation; and
- Chance variability is larger in ranking statistics than in mortality rates, because ranks represent a relative scale and are correlated to each other.
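The effect the authors describe can be seen in a quick simulation. The sketch below is purely illustrative and is not the study's analysis: it assumes 16 hospitals with an identical true mortality rate of 3 percent (the article's reported mean) and annual volumes spanning the reported three-year range, then ranks them on observed mortality over repeated hypothetical periods. Even though no hospital is truly better than another, the ranks scatter widely by chance alone.

```python
import random

# Illustrative assumptions, not the study's data: 16 hospitals share an
# identical true mortality rate of 3%, with three-year volumes spread
# across the article's reported range (~1,600 to 5,700 procedures).
random.seed(42)

TRUE_RATE = 0.03
N_HOSPITALS = 16
VOLUMES = [1600 + i * (5700 - 1600) // (N_HOSPITALS - 1)
           for i in range(N_HOSPITALS)]
N_SIMULATIONS = 200

def simulate_ranks():
    """Rank hospitals by observed mortality in one simulated period."""
    observed = []
    for h, volume in enumerate(VOLUMES):
        deaths = sum(random.random() < TRUE_RATE for _ in range(volume))
        observed.append((deaths / volume, h))
    # Rank 1 = lowest observed mortality.
    return {h: rank + 1 for rank, (_, h) in enumerate(sorted(observed))}

# Count how often hospital 0 (the smallest-volume center) lands in each rank.
rank_counts = [0] * N_HOSPITALS
for _ in range(N_SIMULATIONS):
    rank_counts[simulate_ranks()[0] - 1] += 1

# With identical true performance, many different ranks are reached by chance.
print(sum(1 for c in rank_counts if c > 0), "distinct ranks out of", N_HOSPITALS)
```

Because every hospital has the same true rate, any spread in the printed rank count reflects chance alone, which is exactly why a mid-table rank carries so little information.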
What are some alternatives to these ranking lists? The authors said comparing each hospital against one value and using “expected ranks” based on the probability that a hospital performs worse than any other hospital are two options.
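One common way to formalize the "expected rank" idea is to set each hospital's expected rank to 1 plus the sum, over all other hospitals, of the probability that the other hospital truly performs better. The sketch below is an assumption-laden illustration, not the authors' exact method: the rates and volumes are hypothetical, and it uses a normal approximation to compare two binomial mortality proportions.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def expected_ranks(rates, volumes):
    """Expected rank per hospital from observed mortality rates and volumes.

    Illustrative sketch: uses a normal approximation to the difference of
    two binomial proportions; a higher mortality rate means worse
    performance, i.e. a higher rank number.
    """
    n = len(rates)
    ranks = []
    for i in range(n):
        er = 1.0
        for j in range(n):
            if i == j:
                continue
            var = (rates[i] * (1 - rates[i]) / volumes[i]
                   + rates[j] * (1 - rates[j]) / volumes[j])
            if var == 0:
                er += 0.5 if rates[j] == rates[i] else float(rates[j] < rates[i])
                continue
            # Probability that hospital j's true rate is below hospital i's.
            er += normal_cdf((rates[i] - rates[j]) / math.sqrt(var))
        ranks.append(er)
    return ranks

# Three hypothetical hospitals with nearly identical rates: the expected
# ranks cluster toward the middle instead of claiming a decisive order.
print(expected_ranks([0.029, 0.030, 0.031], [2000, 2000, 2000]))
```

Unlike an observed rank, an expected rank of, say, 1.9 out of 3 signals that the data cannot confidently separate that hospital from its peers, which directly addresses the instability the study documents.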
“In conclusion, rankings are an imprecise statistical method to report cardiac surgery mortality rates,” the authors concluded. “We strongly discourage the use of ranking lists for the purpose of comparison of risk-adjusted cardiac surgery mortality rates.”