Researchers have developed a point-of-care smartphone app that helps physicians identify cardiac implantable electronic devices (CIEDs) in urgent or emergent settings, according to a study published in JACC: Clinical Electrophysiology.
Kevin J. Ferrick, MD, of Montefiore Medical Center, and colleagues developed the AI approach as an alternative to standard CIED identification in hospitals. Currently, CIED programming and interrogation require dedicated equipment and proprietary software that, while certainly useful, might not be accessible in an emergency setting.
“Device identification currently relies on manual inspection of chest radiographs,” Ferrick and co-authors wrote. “However, this is time-consuming and difficult, requiring subjective provider interpretation.”
Ferrick’s team saw a more objective solution in artificial intelligence, using anteroposterior and posteroanterior chest radiographs from patients with pacemakers or defibrillators to train a neural network to quickly identify a CIED. The researchers pulled images dated between 2016 and 2018 from the Albert Einstein College of Medicine’s EHR, de-identifying and coding each device as either Medtronic, Abbott/St. Jude Medical, Boston Scientific or Biotronik.
Ferrick et al. cropped the raw x-rays to 400-by-400-pixel single-channel grayscale files and performed data augmentation using a variety of transformations. The team then captured those images on a mobile phone, allowing the AI to incorporate screenshots and artifact variations into its model. Ambient lighting conditions were altered throughout the phone capture process.
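The study does not publish its preprocessing code, but the pipeline described above can be sketched in a few lines. In this illustrative version (the function names and the brightness-shift/flip augmentations are assumptions, not the authors' actual choices), a radiograph is center-cropped to a 400-by-400 single-channel array and then randomly perturbed:

```python
import numpy as np

def center_crop(img: np.ndarray, size: int = 400) -> np.ndarray:
    """Center-crop a 2-D grayscale array to size x size pixels."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Toy augmentation: random brightness shift plus an optional
    horizontal flip, standing in for the study's unspecified variables."""
    out = np.clip(img + rng.uniform(-0.1, 0.1), 0.0, 1.0)
    if rng.random() < 0.5:
        out = out[:, ::-1]  # mirror left-right
    return out

# Demo on a synthetic "radiograph" with intensities in [0, 1)
rng = np.random.default_rng(0)
xray = rng.random((512, 480))
patch = center_crop(xray)
aug = augment(patch, rng)
print(patch.shape)  # (400, 400)
```

In practice the crops would be fed to the neural network only after the smartphone-capture step the team describes, so the augmented set also absorbs screen glare and camera artifacts.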
A total of 1,509 radiographs were included in the analysis, 47% of which were from patients with pacemakers and 53% of which were from patients with implantable cardioverter-defibrillators. Just over 3,000 images were loaded for analysis after smartphone camera capture, and randomly splitting them in a 7:2:1 training:validation:testing ratio yielded 2,106 images for the team's training dataset, 602 for the validation dataset and 300 to finalize the model.
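The reported counts line up with the stated 7:2:1 ratio, which is easy to verify:

```python
# Check the paper's split counts against the 7:2:1 ratio.
train, val, test = 2106, 602, 300
total = train + val + test
print(total)  # 3008 -- the "just over 3,000" images
for name, n in [("train", train), ("validation", val), ("test", test)]:
    print(f"{name}: {n} ({n / total:.1%})")
```

Each subset comes out within a fraction of a percent of the nominal 70/20/10 proportions.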
In the final test group, Ferrick and colleagues reported the mobile phone app they developed correctly classified 95% of Boston Scientific images, 91% of Biotronik images, 94% of Medtronic images and 100% of Abbott/St. Jude Medical images.
“These results yielded receiver-operating characteristic curves with excellent areas under the curve,” the authors wrote. “When the validation dataset was examined, the accuracy was 97%, and the loss was minimal at 0.11.”
According to the study, the finalized model achieved 95% sensitivity and 98% specificity.
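For a four-class task like this one, per-manufacturer sensitivity and specificity are typically derived from a confusion matrix by treating each class one-versus-rest. The sketch below shows the mechanics with an entirely made-up matrix (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical confusion matrix: rows = true manufacturer,
# columns = predicted. Values are invented for illustration only.
labels = ["Medtronic", "Abbott/SJM", "Boston Scientific", "Biotronik"]
cm = np.array([
    [94,  2,  2,  2],
    [ 0, 100, 0,  0],
    [ 3,  1, 95,  1],
    [ 4,  3,  2, 91],
])

for i, name in enumerate(labels):
    tp = cm[i, i]                       # correctly classified as class i
    fn = cm[i].sum() - tp               # class i missed
    fp = cm[:, i].sum() - tp            # other classes called class i
    tn = cm.sum() - tp - fn - fp        # everything else
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    print(f"{name}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Averaging those per-class figures is one common way to arrive at single summary numbers like the study's 95% sensitivity and 98% specificity.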
Ferrick et al. noted that because their training sample was relatively small and drawn from a single institution, external validation of the model would be valuable. Still, their neural network accurately identified CIEDs on chest radiographs and translated that ability into a phone app.
“Rather than the conventional ‘bench-to-bedside’ approach of translational research, we demonstrated the feasibility of ‘big data-to-bedside’ endeavors,” the team said. “This research has the potential to facilitate device identification in urgent scenarios in medical settings with limited resources.”