Scivillage.com Casual Discussion Science Forum

Full Version: AI chatbots may worsen mental illness (new research)
https://www.eurekalert.org/news-releases/1117458

INTRO: People with mental illness who use AI chatbots risk experiencing a worsening of their condition. This is shown by a new study published in the international journal Acta Psychiatrica Scandinavica.

The researchers screened electronic health records from nearly 54,000 patients with mental illness and found several cases in which the use of AI chatbots appears to have had negative consequences – primarily in the form of worsened delusions, but also potential worsening of mania, suicidal ideation, and eating disorders.

"It supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness," says Professor Søren Dinesen Østergaard from Aarhus University and Aarhus University Hospital, who leads the research group behind the study.

Chatbots confirm delusions. In their study, the researchers found examples of delusions that were likely worsened due to patients' interactions with AI chatbots. According to Søren Dinesen Østergaard, there is a logical explanation for this.

"AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one. Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia," he says.

Risky for people with severe mental illness. According to Søren Dinesen Østergaard, the study should prompt increased awareness among healthcare professionals working with mental illness. He believes they should discuss AI chatbot use with their patients.

"Despite our knowledge in this area still being limited, I would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness – such as schizophrenia or bipolar disorder. I would urge caution here," he says... (MORE - details, no ads)
(Feb 23, 2026 10:37 PM)C C Wrote: [ -> ]...Chatbots confirm delusions. In their study, the researchers found examples of delusions that were likely worsened due to patients' interactions with AI chatbots. According to Søren Dinesen Østergaard, there is a logical explanation for this.

"AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one. Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia," he says.

That's exactly what the left does with the gender dysphoric.
Chatbots are 'constantly validating everything' even when you're suicidal. New research measures how dangerous AI psychosis really is
https://fortune.com/2026/03/07/chatbots-...ss-health/

INTRO: Artificial intelligence has rapidly moved from a niche technology to an everyday companion, with millions of people turning to chatbots for advice, emotional support, and conversation. But a growing body of research and expert testimony suggests that because chatbots are so sycophantic, and because people use them for everything, they may be contributing to an increase in delusional and manic symptoms in users with mental health conditions.

A new study out of Aarhus University in Denmark shows increased use of chatbots may lead to worsening symptoms of delusions and mania in vulnerable communities. Professor Søren Dinesen Østergaard, one of the researchers on the study – which screened electronic health records from nearly 54,000 patients with mental illness – is warning that AI chatbots are designed to target those most vulnerable.

“It supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness,” Østergaard said in the study, released in February. His work builds on his 2023 study which found chatbots may cause a “cognitive dissonance [that] may fuel delusions in those with increased propensity towards psychosis.”

Other psychologists go deeper into the harms of chatbots, saying they were intentionally designed to always reaffirm the user – something particularly dangerous for those with mental health issues like mania and schizophrenia. "The chatbot confirms and validates everything they say. That is, we've never had something like that happen with people with delusional disorders, where somebody constantly reinforces them," Dr. Jodi Halpern, chair and professor of bioethics at UC Berkeley's School of Public Health, told Fortune.

Dr. Adam Chekroud, a psychiatry professor at Yale University and CEO of the mental health company Spring Health, went as far as to call a chatbot "a huge sycophant" that is "constantly validating everything that people say back to it." (MORE - details)
(Mar 7, 2026 06:40 PM)C C Wrote: [ -> ]Chatbots are 'constantly validating everything' even when you're suicidal. New research measures how dangerous AI psychosis really is
https://fortune.com/2026/03/07/chatbots-...ss-health/

(Sep 18, 2025 12:39 AM)Syne Wrote: [ -> ]I've still yet to see an "AI" that can be disagreeable enough to be convincingly conscious.