The promise of artificial intelligence (AI) to revolutionize healthcare is the subject of increasing research, with new publications devoted to it every day. One such application, according to an April 1 article in The Wall Street Journal, is using AI to listen to a person’s voice and detect a range of mental and physical ailments, including coronary artery disease (CAD).
The Mayo Clinic conducted a two-year study through February 2017 which found that a machine learning algorithm could detect specific voice biomarkers of CAD. This connection was confirmed when the participants underwent angiograms, the WSJ reported.
AI voice technology also could signal depression and help keep drivers awake at the wheel by engaging them in conversation and analyzing whether their voice sounds sleepy or low, or whether they make sounds consistent with a yawn.
A major hurdle for this technology, though, is people’s hesitance to have their facial expressions and voices analyzed by AI. In a survey of more than 4,000 respondents, over half said they didn’t want AI constantly learning from them in this manner, according to the report.