Researchers from the University of California, Los Angeles (UCLA) have developed a new algorithm that better predicts pre- and post-heart transplant survival than existing methods.
Dubbed “Trees of Predictors” (ToPs), the algorithm combines machine learning and 53 data points—including age, gender, body mass index, blood type and blood chemistry—to assess the differences among people waiting for heart transplants and their compatibility with donor organs.
Because there are two people in this equation—the donor and the recipient—it has been difficult for existing models to accurately predict survival, wrote lead author Jinsung Yoon and colleagues in PLOS One.
But this new model incorporates information about the potential recipient (33 data points), the donor (14 data points) and the compatibility between the two (six data points) to address this issue. Yoon et al. noted their model predicted three-month post-transplant survival with an area under the curve (AUC) of 0.660; the best clinical risk-scoring method has an AUC of 0.587, they said.
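For readers unfamiliar with the metric, AUC is the probability that the model ranks a randomly chosen patient who had the outcome above a randomly chosen patient who did not, so 0.5 is chance and 1.0 is perfect discrimination. A minimal illustration in Python, using made-up labels and scores rather than the study's data:

```python
def auc(labels, scores):
    """Probability a randomly chosen positive case outranks a negative one
    (ties count as half); this is the rank-statistic definition of AUC."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two positive cases, three negative cases (hypothetical scores)
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.4, 0.5, 0.3, 0.2]
print(auc(labels, scores))  # 0.8333...
```

On this scale, the jump from 0.587 to 0.660 means the new model correctly ranks a noticeably larger share of patient pairs.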
The algorithm outperformed a conventional benchmark by 14 percent when predicting mortality at three years post-transplantation, accurately identifying 2,442 more of the 17,441 total survivors. It also predicted 13 percent more deaths among patients who actually died.
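That 14 percent figure is consistent with the counts reported: 2,442 additional correctly identified patients out of 17,441 total survivors. A quick arithmetic check:

```python
additional = 2442   # extra survivors the new model identified correctly
total = 17441       # total survivors at three years post-transplant
improvement = additional / total * 100
print(round(improvement))  # 14
```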
“Our work suggests that more lives could be saved with the application of this new machine-learning–based algorithm,” senior study author Mihaela van der Schaar, a professor at both UCLA and University of Oxford, said in a press release. “It would be especially useful for determining which patients need heart transplants most urgently and which patients are good candidates for bridge therapies such as implanted mechanical-assist devices.”
The researchers tested their prediction model on 51,971 transplant patients and 30,911 wait-listed patients who were registered in the United Network for Organ Sharing (UNOS) database from 1985 to 2015.
Combined, the top three clinical risk-scoring methods have already identified the most relevant features for prediction, the authors noted. But the machine learning aspect of ToPs sets it apart, they said, allowing it to “discover” the most relevant features of a dataset as it incorporates new information. ToPs also identifies subpopulations within the dataset in which certain factors are particularly relevant, improving the personalization of its predictions.
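The core idea, fitting a separate predictor to each discovered subpopulation rather than one model for everyone, can be sketched in a few lines. The feature, threshold and per-subgroup predictor below are hypothetical simplifications for illustration, not the study's actual method:

```python
# Minimal sketch of the intuition behind Trees of Predictors (ToPs):
# split the population into subgroups and fit a separate predictor in each,
# so factors that matter only for some patients get their own model.

def fit_leaf(rows):
    """Simplest possible predictor: the subgroup's observed survival rate."""
    return sum(r["survived"] for r in rows) / len(rows)

def tops_sketch(rows, feature, threshold):
    """One split: a predictor per subgroup instead of one global model."""
    low = [r for r in rows if r[feature] < threshold]
    high = [r for r in rows if r[feature] >= threshold]
    return {"lt": fit_leaf(low), "ge": fit_leaf(high)}

# Hypothetical patients (not study data), split on a made-up age threshold
patients = [
    {"age": 35, "survived": 1}, {"age": 42, "survived": 1},
    {"age": 68, "survived": 0}, {"age": 71, "survived": 1},
    {"age": 75, "survived": 0},
]
model = tops_sketch(patients, "age", 60)
print(model)  # {'lt': 1.0, 'ge': 0.333...}
```

The full algorithm recursively tries many such splits and keeps one only when the subgroup-specific predictors beat the parent model on held-out data, which is how it "discovers" which features matter for which patients.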
“Our predictive model can be easily and automatically re-trained as clinical practice changes and new data becomes available,” Yoon and colleagues wrote. “The clinical and public health implications of our findings are broad and include improved personalization of clinical assessments, optimization of decision making to allocate limited life-saving resources and potential for healthcare cost reduction across a range of clinical problems.”
As with all machine learning models, the tool is only as good as the data fed into it, the authors acknowledged. Missing or low-quality source data could limit the model’s ability to predict outcomes.