Will Artificial Intelligence Make Psychiatry Better?

Batya Swift Yasgur, MA, LSW

February 25, 2020

Artificial intelligence (AI) is garnering increasing attention and popularity across medicine, and psychiatry is no exception. However, whether AI will live up to its promise and improve the diagnosis and treatment of mental illness remains to be seen.

AI is the general concept of creating expert systems to carry out various kinds of tasks; machine learning (ML), a subset of AI, uses statistical systems that learn features of the data that are predictive of some variable of interest, explained Peter Foltz, PhD, a research professor at the University of Colorado Institute of Cognitive Science in Boulder.

One reason for the growing interest in AI is the "enormous range" of potential applications it offers, said John Krystal, MD, professor and chair, department of psychiatry at Yale University School of Medicine, New Haven, Connecticut.

"AI can help us better analyze brain scans in research, better understand the structure of genomic information, identify people at risk for bad outcomes like suicide, try to improve the way we allocate resources, predict who will and won't benefit by enriched psychosocial support, guide medication selection, and remotely monitor patients at risk," Krystal told Medscape Medical News.

While some doctors are concerned that AI and ML may ultimately replace them, Foltz noted that AI is not intended to supplant physicians, but to augment clinical practice.

A recent survey of 791 psychiatrists from 22 countries showed that 50% of respondents predicted their jobs would be "substantially changed" by AI/ML — particularly when it comes to documenting and updating medical records and synthesizing information.

"We need to look at these kinds of technologies as tools that will convey more timely and sensitive information to clinicians and alert clinicians to patients who might need additional follow-up," said Krystal.

"A good example of things that ML or AI can do better than humans is to predict a response to drug A vs drug B, because a computer can keep in mind far more variables than a person can and can do so extremely efficiently when using validated algorithms," he noted.

Data collected through routine depression screening, "enriched with information the patient might provide to a clinician, would be put into a format that is standardized so that the same data provided by all patients can be used in an algorithm," he explained.

Krystal and colleagues developed such an algorithm to predict which patients with depression might achieve symptomatic remission after a 12-week course of escitalopram.

The 2016 study, published in the Lancet Psychiatry, used data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial to develop an algorithm consisting of 25 predictive variables, including particular sociodemographic features, scores on depressive severity checklists, and number of previous major depressive episodes.

These 25 variables were used to train an ML model to predict clinical remission. The researchers then validated the model by applying it to an escitalopram treatment group from an independent clinical trial, the Combining Medications to Enhance Depression Outcomes (CO-MED) study.

The model they developed predicted outcomes in the STAR*D cohort "with accuracy significantly above chance" (64.6% [SD 3.2]; P < .0001) and was externally validated in the CO-MED escitalopram treatment group.
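For readers unfamiliar with this workflow, the sketch below illustrates the general train-then-externally-validate pattern the study describes. The column names, file names, and choice of classifier are illustrative assumptions, not the actual STAR*D/CO-MED variables or the published algorithm.

# Minimal sketch of developing a model on one cohort and validating it on another.
# File names, column names, and the classifier are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

train = pd.read_csv("stard_baseline.csv")          # development cohort (hypothetical file)
external = pd.read_csv("comed_escitalopram.csv")   # independent validation cohort (hypothetical file)

# Baseline predictors (sociodemographics, severity items, prior episodes, etc.)
predictors = [c for c in train.columns if c != "remission"]

# Fit the model on the development cohort only.
model = GradientBoostingClassifier(random_state=0)
model.fit(train[predictors], train["remission"])

# External validation: apply the frozen model, unchanged, to the independent
# cohort and compare its predictions against observed remission.
ext_pred = model.predict(external[predictors])
print("External accuracy:", accuracy_score(external["remission"], ext_pred))

The key design point is that the model is never refit on the validation cohort; its accuracy there is what supports the claim of generalization beyond the data it was trained on.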

These results were replicated in a subsequent study where investigators used ML to analyze the ability of 20 depressive symptoms to predict response to antidepressants, using data from nine antidepressant clinical trials and more than 7000 patients.

The investigators found that three symptom clusters — sleep/insomnia, core emotional symptoms, and atypical symptoms — were associated with response to specific medications.

This is an "example of how AI has the potential to make a good clinician even better," Krystal said.

Real-Time Assessment

Foltz and colleagues have developed an app-based technology to remotely monitor and track subtle changes in the speech patterns of patients with schizophrenia.

Tracking these subtle language changes can help clinicians obtain "real-time objective assessment[s]" of patients' fluctuations in speech and mental health, which can suggest potentially concerning shifts in mood or thought process, said Brita Elvevåg, PhD, a cognitive neuroscientist at the University of Tromsø, Norway, who collaborated with Foltz on the research.

Foltz, Elvevåg, and colleagues have conducted a number of studies that support this methodology.

One study compared healthy participants to those with stable mental illness (n = 120 and n = 105, respectively). Both groups used a mobile app to retell stories one day after hearing them in a laboratory, with the retelling taking place outside the laboratory setting.

Automatic speech recognition tools tracked language longitudinally and extracted language-based features from the retellings.

The findings of the AI model were then compared to clinician evaluations.

The AI model was as accurate as clinician evaluation in identifying patients with schizophrenia who might be showing worrisome symptoms and in differentiating psychiatric patients from healthy individuals.
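Conceptually, the pipeline pairs language features extracted from transcribed speech with clinician judgments. The sketch below shows that general shape using simple off-the-shelf text features; the file name, feature choices, and model are assumptions for illustration, not the methods of the published studies, which used richer measures such as coherence over time.

# Rough sketch: derive language features from transcripts, score them with a model,
# and compare the model's calls against clinician evaluations. All names are assumed.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score

# Hypothetical table: one transcribed retelling per row, with a clinician label
# (e.g., 0 = healthy control, 1 = patient flagged as symptomatic).
data = pd.read_csv("retelling_transcripts.csv")   # columns: transcript, clinician_label

# Language-based features: simple TF-IDF over words for this sketch.
features = TfidfVectorizer(min_df=2).fit_transform(data["transcript"])

# Cross-validated predictions, so each transcript is scored by a model
# that never saw it during training.
model_labels = cross_val_predict(
    LogisticRegression(max_iter=1000), features, data["clinician_label"], cv=5
)

# Agreement between the model's calls and clinician evaluations.
print("Model-clinician agreement (Cohen's kappa):",
      cohen_kappa_score(data["clinician_label"], model_labels))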

Foltz noted that in psychiatry, diagnosis relies heavily on speaking with patients face to face. This process, he added, is "very time-consuming" and costly, and access to in-person psychiatric consultation can be difficult for many patients, especially those in rural areas.

Such tools, he said, have the potential to collect data remotely, quickly transmit the information to clinicians, and provide clues that can be used to see which patients might need in-person assessment.

However, despite their promise, AI technologies are not yet ready for prime time.

Such technologies "need to be evaluated more carefully before they can be applied at large, commercial scales," said Foltz.

To that end, a recent paper published by his group suggests a framework for evaluating AI — particularly ML — in psychiatry.

A Matter of Trust

Chelsea Chandler, the paper's lead author and a doctoral candidate at CU Boulder, said its aim is to "lay out the current shortcomings in regard to developing AI approaches in hopes that moving forward, they will be generalizable, transparent, and trustworthy."

Elvevåg, who co-authored the paper, elaborated.

"For AI to be deployed in the clinical setting, practicing clinicians should be involved in its development," she said, noting that some of her group's approaches were informed by a survey of US-based clinicians specializing in risk assessment of severely mentally ill patients.

These tools should ease clinical work, not complicate it.

"No one will be interested in, or use a tool that would add more work to a clinician's already packed schedule," said Elvevåg.

Moreover, said Foltz, before deploying AI techniques, "clinicians must have a sufficient level of trust" in the accuracy of AI/ML predictions.

To be trustworthy, an ML model must be "explainable," said Foltz. That is, "it must be possible to obtain a description of the reason a model arrived at a decision and must be transparent."

"The clinician who uses [the tool] should understand that the results it gives tie in to the information the clinician should care about, so they have to also understand the details of the system and when the system can be appropriately used and when it shouldn't be used," Foltz noted.

The third component of "trustworthiness" is "generalizability."

"AI might sound like a 'brave new world,' but before people get too excited about its role in transforming psychiatry, it is important to know whether a given model can be generalized, so we have to think about the training set on which the model has been trained," said Elvevåg.

She noted, "much of behavioral science is based on populations in our databases, which are usually based on people in the Western world who are educated, industrialized, and democratic, thus introducing biases into the training set, so we need to be sure that any model is applicable to other populations, too."

Krystal agreed. "When people develop models but don't adequately validate them, we don't know how stable or reliable the models are, and how they do or don't generalize to in other datasets."

"Having the wrong or poor model and acting on it can sometimes lead to drastically bad decision making, as opposed to improvement in medical decision making," he said.

Legal, Ethical "Minefield"

Elvevåg described the potential legal issues surrounding AI in psychiatry as a "minefield."

"What would happen if one of these systems was responsible for missing a suicide or, conversely, if the clinician or hospital is alerted too many times and the patient is sent [to the clinician or hospital] too often? This increases the burden on the patient, the family, the clinician, and the healthcare system," she said.

Some patients, such as those with schizophrenia or cognitive impairments, may also have difficulty using apps and devices.

Additionally, new technologies have the potential for serious data breaches and violation of patients' privacy.

The data "could be floating somewhere in the stratosphere, so we need to have debate about the right framework for all the different stages of data collection and how it is used for research, clinical practice, and building ML models," she said.

"Lawyers and ethicists have different angles on these issues than clinicians or research scientists do, so everyone, including computer scientists, should weigh in on the implications of these technologies," she said.

Krystal agreed. "The more kinds of data we use to generate increasingly informative predictions, which are really more meaningful ways to guide treatment, the more we'll have to ask questions about who owns the data, how the data is going to be used, how outcomes of models are going to be used, and who will use them."

To that end, the National Institutes of Health (NIH) held a multidisciplinary conference in July 2019, with experts in law, ethics, and research contributing their perspectives — an event that Elvevåg described as "brilliant" in advancing the discussions that need to take place around these issues.

Proceed With Caution

Elvevåg is "enormously optimistic" that the information gleaned from AI can "potentially help in complicated cases and provide much-needed empirical data for understanding the nature of [psychiatric] conditions." Nevertheless, "we must proceed with caution," she said.

Despite risks on the technical side coupled with "thorny issues from the privacy and ethical side," everyone believes there is "enormous potential," Krystal added.

However, these concerns "do not negate the promises that are present in applying novel approaches to existing types of data we already have and new types of data that we know are out there but haven't been well integrated yet."

The "technical and ethical sides of AI-based approaches must move forward together at the same time, so when AI approaches are rolled out, we need to make sure that the ethical side is implemented right along with them," Krystal said.

Chandler, Foltz, and Elvevåg have disclosed no relevant financial relationships. Krystal is the editor of Biological Psychiatry. He consults for AbbVie Inc, AMGEN, Astellas Pharma Global Development Inc, AstraZeneca Pharmaceuticals, Biomedisyn Corporation, Bristol-Myers Squibb, Eli Lilly and Co, Euthymics Bioscience Inc, Neurovance Inc (a subsidiary of Euthymics Bioscience), Forum Pharmaceuticals, Janssen Research & Development, Lundbeck Research USA, Novartis Pharma AG, Otsuka America Pharmaceutical Inc, Sage Therapeutics Inc, Sunovion Pharmaceuticals Inc, and Takeda Industries. His other disclosures are listed on the original paper.

Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology. 2019;June:137-147. Full text

Schizophr Bull. Published online November 1, 2019. Abstract

Lancet Psychiatry. 2016;3:243-50. Abstract

JAMA Psychiatry. 2017;74(4):370-378. Full text

