Research AI chatbots may worsen mental illness (new research)

#1
C C Offline
https://www.eurekalert.org/news-releases/1117458

INTRO: People with mental illness who use AI chatbots risk experiencing a worsening of their condition. This is shown by a new study published in the international journal Acta Psychiatrica Scandinavica.

The researchers screened electronic health records from nearly 54,000 patients with mental illness and found several cases in which the use of AI chatbots appears to have had negative consequences – primarily in the form of worsened delusions, but also potential worsening of mania, suicidal ideation, and eating disorders.

"It supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness," says Professor Søren Dinesen Østergaard from Aarhus University and Aarhus University Hospital, who leads the research group behind the study.

Chatbots confirm delusions. In their study, the researchers found examples of delusions that were likely worsened due to patients' interactions with AI chatbots. According to Søren Dinesen Østergaard, there is a logical explanation for this.

"AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one. Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia," he says.

Risky for people with severe mental illness. According to Søren Dinesen Østergaard, the study should prompt increased awareness among healthcare professionals working with mental illness. He believes they should discuss AI chatbot use with their patients.

"Despite our knowledge in this area still being limited, I would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness – such as schizophrenia or bipolar disorder. I would urge caution here," he says... (MORE - details, no ads)
#2
Syne Offline
(Feb 23, 2026 10:37 PM)C C Wrote: ...Chatbots confirm delusions. In their study, the researchers found examples of delusions that were likely worsened due to patients' interactions with AI chatbots. According to Søren Dinesen Østergaard, there is a logical explanation for this.

"AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one. Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia," he says.

That's exactly what the left does with the gender dysphoric.
#3
C C Offline
Chatbots are 'constantly validating everything' even when you're suicidal. New research measures how dangerous AI psychosis really is
https://fortune.com/2026/03/07/chatbots-...ss-health/

INTRO: Artificial intelligence has rapidly moved from a niche technology to an everyday companion, with millions of people turning to chatbots for advice, emotional support, and conversation. But a growing body of research and expert testimony suggests that because chatbots are so sycophantic, and because people use them for everything, they may be contributing to an increase in delusional and manic symptoms in users with mental illness.

A new study out of Aarhus University in Denmark shows increased use of chatbots may lead to worsening symptoms of delusions and mania in vulnerable communities. Professor Søren Dinesen Østergaard, one of the researchers on the study—which screened electronic health records from nearly 54,000 patients with mental illness—is warning that AI chatbots are designed in ways that put the most vulnerable at risk.

“It supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness,” Østergaard said in the study, released in February. His work builds on his 2023 study which found chatbots may cause a “cognitive dissonance [that] may fuel delusions in those with increased propensity towards psychosis.”

Other psychologists go deeper into the harms of chatbots, saying they were intentionally designed to always reaffirm the user—something particularly dangerous for those with mental health issues like mania and schizophrenia. “The chatbot confirms and validates everything they say. That is, we’ve never had something like that happen with people with delusional disorders, where somebody constantly reinforces them,” Dr. Jodi Halpern, chair and professor of bioethics at UC Berkeley’s School of Public Health, told Fortune.

Dr. Adam Chekroud, a psychiatry professor at Yale University and CEO of the mental health company Spring Health, went so far as to call a chatbot “a huge sycophant” that is “constantly validating everything that people say back to it.” (MORE - details)
#4
Syne Offline
(Mar 7, 2026 06:40 PM)C C Wrote: Chatbots are 'constantly validating everything' even when you're suicidal. New research measures how dangerous AI psychosis really is
https://fortune.com/2026/03/07/chatbots-...ss-health/

(Sep 18, 2025 12:39 AM)Syne Wrote: I've still yet to see an "AI" that can be disagreeable enough to be convincingly conscious.