ALGORITHM: New developments in artificial intelligence (AI) offer a way to predict the odds that an individual will attempt suicide.
By Andrew Carlson, Staff Editor
Imagine if every time you checked your phone, someone in the U.S. died by suicide. It’s not as absurd as it sounds. On average, Americans check their phones between 80 and 140 times each day. Suicide? An average of 123 deaths a day. And for every death by suicide, roughly 25 more people attempt it. Suicide is the tenth leading cause of death in our country, and most doctors are no better at assessing suicidal patients than they were 40 years ago. In one study, Australian researcher Matthew Large and his team found that 95% of “high risk” patients never die by suicide at all, while half of the patients who do die come from the “low risk” category. This appalling rate of misclassification explains why every suicide leaves behind a whirlwind of “whys.”
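To see what those percentages imply in practice, here is a back-of-envelope sketch in Python. The cohort size and the one-percent base rate are invented purely for the arithmetic; only the 50% sensitivity and 5% positive predictive value come from the figures above.

```python
# Hypothetical cohort illustrating the Large et al. figures.
# The cohort size and base rate are invented for arithmetic only.
cohort = 10_000        # patients given a traditional risk assessment
suicides = 100         # assumed 1% base rate (illustrative, not from the study)

sensitivity = 0.50     # half of all suicides come from the "low risk" group,
                       # so the "high risk" label catches only the other half
ppv = 0.05             # 95% of "high risk" patients never die by suicide

caught = suicides * sensitivity   # suicides correctly flagged: 50
flagged = caught / ppv            # total patients labeled "high risk": 1,000

print(f"Flagging {flagged:.0f} of {cohort} patients catches "
      f"{caught:.0f} of {suicides} suicides")
# -> Flagging 1000 of 10000 patients catches 50 of 100 suicides
```

In other words, under these assumed numbers a traditional assessment would flag one in ten patients as high risk while still missing half of the eventual deaths.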
It’s clear that a better method is needed to accurately identify and prevent suicides. Colin G. Walsh, Assistant Professor of Biomedical Informatics, Medicine, and Psychiatry at Vanderbilt University, believes he has found the desperately needed answer. In 2017, his team of data researchers set out to overcome the prevalent limitations of traditional suicide risk assessment by creating a machine-learning algorithm designed to predict future suicide attempts. The researchers examined 5,167 adult patients who had previously been admitted to the hospital for suicide attempts or self-harm. Using hospital records and factors such as age, gender, zip code, medication prescriptions, and diagnostic history, the algorithm predicted the odds of an individual taking their own life. It was 84% accurate at predicting whether someone would attempt suicide in the next week, and 80% accurate at predicting an attempt within the next two years. Walsh wants to see the program incorporated into hospitals and mental health clinics nationwide within the next two years. He acknowledges that the algorithm is not yet perfect, but says it will improve with exposure to more medical data.
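For readers curious what such a model looks like under the hood, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the article does not name Walsh’s actual model, so a generic random-forest classifier stands in, the feature columns are invented stand-ins for the record fields listed above, and the data are synthetic. A real system would train on de-identified electronic health records under ethical oversight.

```python
# Illustrative sketch only: a generic risk classifier trained on synthetic
# stand-ins for the record fields the article lists. The column names, the
# random-forest choice, and the data are assumptions, not Walsh's pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5167  # cohort size reported in the article

records = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "gender": rng.integers(0, 2, n),
    "zip_region": rng.integers(0, 50, n),         # coarse geography
    "n_prescriptions": rng.poisson(1.5, n),       # medication history
    "prior_selfharm_codes": rng.poisson(0.8, n),  # diagnostic history
})

# Synthetic label: attempt risk rises with prior self-harm and prescriptions.
logit = (-2.5 + 0.6 * records["prior_selfharm_codes"]
         + 0.3 * records["n_prescriptions"])
attempted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    records, attempted, test_size=0.25, random_state=0, stratify=attempted)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Accuracy figures like the article's "84%" are typically AUC: the chance the
# model ranks a true future attempter above a randomly chosen non-attempter.
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC on held-out patients: {roc_auc_score(y_test, probs):.2f}")
```

The key design point is that the model outputs a probability rather than a yes-or-no verdict, which is what lets clinicians weigh its score against their own judgment.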
This new method of suicide risk diagnosis also raises ethical questions that have never been asked before. Would every patient admitted to the hospital be screened by the algorithm? Would they be told this is happening? Who would have access to the hospital data used to assess risk? Should a machine override a doctor, or should a doctor override a machine?
With such a devastating epidemic plaguing our peers, it is essential that we take every precaution for every patient. Many suicide victims never ask for help, and “people who are very depressed can be very good actors,” said Warren Taylor, a psychiatrist who consulted on the study. This new test, once it is clinic-ready, should be administered to all patients, with the resulting information and treatment options made readily available. Collaboration between doctors and AI could reduce the number of needless deaths happening around the country. The algorithm will alert doctors to possible risk, but it is up to doctors, psychiatrists, family, and friends to help those at risk of self-harm. Ultimately, it is empathy that saves lives, not just a machine spitting out numbers.