News Story from dana.org
Over the past few years, America has lost several celebrities, including actor/comedian Robin Williams and fashion designer Kate Spade, to suicide. It’s not a surprise: Suicide rates have been increasing across the board in the United States. According to the National Institute of Mental Health (NIMH), 1.3 million people in the US attempted suicide in 2016 – and nearly 45,000 died. This is nearly a 25 percent increase from the numbers posted in 2000.
To help combat what is being called a problem of epidemic proportions, the Mental Health Research Network, led by researchers at Kaiser Permanente, has developed a computer model based on data collected during outpatient visits to help identify which patients may be at the most risk for killing themselves.
Building on military and veteran models
In 2009, the U.S. Army, in partnership with NIMH, launched the Army Study to Assess Risk and Resilience in Servicemembers (STARRS) project, aiming to stem the increasing rate of suicides in the rank and file (See “New Army Risk and Resilience Project Searches for Signs of Potential Suicide”). The consortium of military and civilian scientists spent years using big data techniques to create a model to help predict which service members would be at highest risk. They succeeded: Their model was good enough that the Veterans Administration (VA) soon got on board to create its own big data model to target the veteran population, said Michael Schoenbaum, a NIMH scientist who collaborated on the Army STARRS project.
“STARRS demonstrated that you could do this kind of predictive modeling for what, really, is a quite rare event. Because suicide remains a fairly rare event,” he says. “NIMH then partnered with the VA, using statistical methods to analyze electronic health record (EHR) data to identify small groups within the larger veteran population that had a high predicted risk for suicide.”
That project, REACH VET, uses about 40 different predictors to help the VA identify patients who may need additional support. It is in active use today.
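The core idea behind a program like REACH VET can be sketched in a few lines: score every patient with a predictive model, rank the population by predicted risk, and flag a small top slice for outreach. The sketch below is illustrative only; the 0.1 percent cutoff and the toy scores are assumptions for this example, not the program's actual rules or data.

```python
# Hypothetical sketch: flag the small slice of a population with the
# highest predicted suicide risk, as a program like REACH VET might.
# The cutoff fraction and the toy scores below are invented for illustration.

def flag_highest_risk(scores, fraction=0.001):
    """Return patient IDs in the top `fraction` of predicted risk."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    n = max(1, int(len(ranked) * fraction))  # always flag at least one patient
    return [pid for pid, _ in ranked[:n]]

# Toy population of 5,000 patients with made-up predicted risk scores.
scores = {f"patient{i}": i / 5000 for i in range(5000)}

flagged = flag_highest_risk(scores)
print(flagged)  # the 5 highest-risk patient IDs in this toy population
```

Ranking and thresholding like this is what lets a health system concentrate limited outreach resources on the handful of patients the model judges to be at highest risk.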
But the Army had vast amounts of data on each servicemember beyond the information collected in a typical patient EHR, including genetic information, criminal justice history, and educational data. That offered the researchers a lot more to work with when using machine learning techniques to create their model. Similarly, the VA has extensive medical and military histories on its patient population. But how might researchers design a predictive model for civilians in the general population who are at heightened risk of taking their own lives?
Multiple sources of data
Gregory Simon, a psychiatrist and senior investigator at Kaiser Permanente Washington Health Research Institute, hoped to create such a model. He and his colleagues at the Mental Health Research Network decided to focus on the last five years of patient visits, including self-reported patient questionnaires, to help inform their model.
“We demonstrated that we could use this data to help identify at-risk patients with about 85 percent accuracy,” says Simon. “A lot of the things we found were similar, but not 100 percent identical, to what the Army and VA have found in their models. Things like the diagnosis received, the prescriptions patients have filled, and so on are important. But the questionnaire data was also predictive.”
More specifically, the group’s algorithm found that factors including prior suicide attempts, psychiatric medications dispensed, mental health and substance abuse diagnoses, emergency room visits, and scores on a standardized depression questionnaire, among others, were the strongest predictors of whether a patient was at risk. The results were published in the May 24, 2018, issue of the American Journal of Psychiatry.
“The model we developed to predict suicide attempts after a mental health visit includes about a hundred different things,” Simon says. “And if you look at that list of a hundred things, any mental health provider would look at each one individually and say, ‘Makes sense.’ But while human beings know these factors are important, they can’t do what these models do, which is to keep track of everything that’s happened in someone’s medical record for the past five years, and efficiently and accurately do the math to calculate out an individual patient’s risk. That’s why having these models is so important to help providers identify the patients who need the most help.”
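The kind of bookkeeping Simon describes, weighing many factors from a patient's record and doing the math to produce a single risk number, can be illustrated with a simple logistic-regression-style scorer. Everything below is a minimal sketch: the feature names, weights, and intercept are invented for this example and are not taken from the published model, which uses roughly a hundred predictors.

```python
# Minimal, hypothetical sketch of a logistic-regression risk scorer over
# features summarizing several years of EHR history. Feature names and
# weights are illustrative only, not from the published model.
import math

def risk_score(features, weights, intercept=-4.0):
    """Logistic-regression probability that a patient is at elevated risk."""
    z = intercept + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative predictor weights (invented for this example).
WEIGHTS = {
    "prior_suicide_attempt": 1.8,   # any recorded prior attempt
    "psych_med_dispensings": 0.15,  # count over the lookback window
    "substance_abuse_dx": 0.9,      # substance use disorder diagnosis
    "er_visits_past_year": 0.3,     # emergency department visits
    "phq9_item9_score": 0.7,        # self-harm item on depression questionnaire
}

patient = {
    "prior_suicide_attempt": 1,
    "psych_med_dispensings": 6,
    "substance_abuse_dx": 0,
    "er_visits_past_year": 2,
    "phq9_item9_score": 2,
}

p = risk_score(patient, WEIGHTS)
print(f"predicted risk: {p:.2%}")
```

The point Simon makes holds even in this toy version: each individual weight looks sensible to a clinician, but only the model can consistently combine all of them across an entire record into one calibrated number.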
While Schoenbaum says that Simon’s model is a good step forward, he cautions that many civilian health systems and insurance companies may not track the data they would need to successfully create and use predictive models.
“The prerequisite for developing a good predictive model of suicide death is having access to data on both possible predictors of suicide risk and actual outcomes,” he says. “You need to know which members of your population have gone on to die by suicide—because your formula is basically the way to distinguish those who are at risk from those who aren’t. Right now, too many organizations just don’t track suicide deaths.”
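The linkage Schoenbaum describes, joining a health system's predictor data to records of who actually died by suicide, is what turns raw records into training data. The sketch below is hypothetical: the patient records, identifiers, and the idea of matching against an external mortality registry are all assumptions made for illustration.

```python
# Hypothetical sketch: labeling training data by linking health-system
# records to an external source of suicide-death outcomes (e.g., a state
# mortality registry). All identifiers and fields are invented.

patients = [
    {"id": "A1", "er_visits": 3, "prior_attempt": 1},
    {"id": "B2", "er_visits": 0, "prior_attempt": 0},
    {"id": "C3", "er_visits": 1, "prior_attempt": 0},
]

# IDs found in the external registry with suicide recorded as cause of death.
suicide_deaths = {"A1"}

# Attach the outcome label that supervised model training requires.
labeled = [
    {**p, "died_by_suicide": int(p["id"] in suicide_deaths)}
    for p in patients
]

for row in labeled:
    print(row)
```

Without the outcome column, there is nothing for a model to learn; this is why organizations that do not track suicide deaths cannot build or validate such models, however rich their predictor data.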
Ronald Kessler, a professor of health care policy at Harvard Medical School and a member of the STARRS team, adds that while it’s helpful to identify which patients are at greatest risk for suicide death, organizations also need to consider what kind of interventions they can offer once those people are detected. He believes that big data also has much to offer in that regard.
“It’s not enough to just say, ‘Is this person suicidal?’ You also need to know how to reach out to that person and find the treatment that will do the most good,” he says. “We’re now trying to develop machine learning models to pick the right treatment for individual patients. We are shifting from ‘Who’s at risk?’ to ‘What’s the optimal treatment for this person in this situation?’ That’s what we need to do.”
Simon agrees. He is optimistic that big data methods will help get us there.
“We’re learning that suicidal behavior is much more predictable than most people think. Certainly, it’s more predictable than what mental health professionals used to think,” he says. “We now have information that is accurate enough to act on. What we need now is to start developing follow-up programs to clarify what the right actions are once we have identified at-risk patients. And, most importantly, we have to hold ourselves accountable for actually doing them.”