Article  Did romantic chatbot interactions lead to teen's suicide?

#1
C C Offline
"Game of Thrones" spoiler in this comment:

And Daenerys is a sneaky character for a chatbot to be emulating. Her strategy of conquest (finally revealed in the end) mimicked that of Marxism–Leninism, wherein one claims to be liberating people from slavery or social injustice in a noble, moral fashion, but in the end simply subjugates everyone under the new authoritarian rule of the Party or a personality-cult leader. At least the latter analogy lacks the aristocratic trademark of royal incest (Daenerys being a product of that Targaryen practice herself, and having sex with her nephew Jon Snow).
- - - - - - - - - - -

A tragedy, a lawsuit, and the birth of an AI moral panic
Past tech panics show how grief and fear can distort evidence. The chatbot suicide case risks repeating the same mistakes.
https://www.freethink.com/artificial-int...oral-panic

EXCERPTS: Can an LLM convince a person to kill themselves? The case, Garcia v. Character Technologies, et al., is both fascinating and tragic. In 2024, 14-year-old Sewell Setzer III killed himself.

In the lawsuit, his mother alleges that his actions were instigated by his interactions with a chatbot developed and marketed by the company Character.AI. The chatbot’s personality was inspired by the “Game of Thrones” character Daenerys, and Setzer developed a parasocial romantic relationship with it. The case filings allege that interactions with the chatbot led to Setzer’s death and that its makers should be held responsible...

[...] This isn’t the first time parents have sued a media or technology entity for allegedly causing the death of their child... These lawsuits generally failed for two reasons.

The first is that free speech is protected in the US, and the defendants were creating expressive works — songs, shows, or games — that the courts deemed were protected under the First Amendment. [...] The second reason those previous lawsuits failed is that it was difficult for the plaintiffs to show that the media in question caused the suicide. I think proving that the Daenerys chatbot caused Setzer’s death will be challenging, too, mainly because the chatbot’s responses actually seem, to me, like they were attempting to discourage him from committing suicide, not the other way around.

[...] The exchanges between Setzer and the chatbot are generally vague, and based on the actual text, I’d say one can reasonably assume the AI believed (as much as a machine can “believe” anything) that all of the conversations were fantasy roleplay. If anything, it discouraged suicide when the topic was broached.

[...] We’ve been through this cycle before, with concerns that violent video games make people violent (they don’t) and that social media leads to poor mental health in youths (it doesn’t) both turning into moral panics. In the early stages of these and other panics, poor-quality studies often appear to support the panic. Policymakers then use these flawed studies to support policies designed to counter the target of the panic.

[...] The conversations I read between Setzer and the chatbot seemed very stereotypically Harlequin Romance to me, dripping with drama and emotion, but I suspect what chatbot developers failed to consider is both the interactive nature of AI and the perception that AI is the product of the adult world. This makes the interactions seem more sinister, and now, anyone hoping to incite a new moral panic can use both the AI’s alleged potential to corrupt teens sexually and the potential that the interactions could negatively impact their mental health as rallying points.

So, now, instead of waiting for good quality science to come in before trying to draw any conclusions about AI and mental health, we have lawmakers using emotional cases like Garcia’s and what research is available to try to regulate interactions between teens and chatbots...

[...] This doesn’t mean AI chatbots should be entirely off the hook, though. Even if the Daenerys chatbot didn’t encourage Setzer to commit suicide, it certainly failed to warn anyone that a teen was contemplating suicide, so there’s a missed-opportunity issue that, with some legal wit, could be argued to be negligence.
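A minimal sketch of the kind of escalation check that paragraph says was missing (hypothetical Python, not Character.AI's actual pipeline; the phrase list, the notify_safety_team hook, and the canned response are all illustrative assumptions):

```python
# Hypothetical sketch: screen each user message for self-harm risk before the
# roleplay model replies, and escalate instead of staying in character.
# Phrase list, hook, and wording are illustrative assumptions only.

RISK_PHRASES = ("kill myself", "end my life", "suicide", "don't want to live")

def screen_message(text: str) -> str | None:
    """Return the matched risk phrase, or None if nothing was flagged."""
    lowered = text.lower()
    for phrase in RISK_PHRASES:
        if phrase in lowered:
            return phrase
    return None

def notify_safety_team(message: str, reason: str) -> None:
    # Placeholder escalation path: alerting moderators, a linked parent
    # account, or crisis resources, depending on policy.
    print(f"[safety] flagged ({reason!r}): {message!r}")

def handle_turn(user_message: str, generate_reply) -> str:
    reason = screen_message(user_message)
    if reason is not None:
        # Break character: don't let the persona keep roleplaying.
        notify_safety_team(user_message, reason)
        return ("It sounds like you may be going through something serious. "
                "Please reach out to a crisis line or someone you trust.")
    return generate_reply(user_message)

# Example: the persona generator is only called for unflagged messages.
reply = handle_turn("I've been thinking about suicide",
                    generate_reply=lambda m: "(persona reply)")
```

Even something this crude shifts the failure mode from "the persona kept roleplaying" to "a human got alerted", which is the gap the negligence argument turns on.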

[...] The bad news is that we’re going to get a lot of high-profile research and dramatic claims from anti-media pressure groups that isn’t that [i.e., good-quality science]. We’ve already seen news media hype AI studies that have glaring weaknesses, including a lack of peer review. Moral panics also love tragic anecdotes... (MORE - missing details)
#2
stryder Offline
In the case of chatbots and suicide, I'd be more inclined to think that a chatbot used by multiple people to discuss their suicidal obsessions is more likely to be groomed (LLM-trained) into supporting negative behavioural traits that could lead those sitting on the fence to decide to commit suicide.

In other words, the chatbot becomes the personification of mob rule, where the mob thinks suicide is the answer.
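Roughly the loop I'm picturing, as a hypothetical Python sketch (deployed chatbots generally don't retrain live on user conversations, and all names below are invented; but a system that naively folded user-approved replies back into its fine-tuning data would drift toward whatever its loudest users reward):

```python
# Hypothetical illustration of the "mob rule" worry: replies that users
# upvote are kept as future fine-tuning data, so the pool ends up reflecting
# what the user base rewards rather than what is safe.

from collections import Counter

fine_tune_pool: list[tuple[str, str]] = []  # (prompt, reply) pairs kept for a later training round

def record_feedback(prompt: str, reply: str, thumbs_up: bool) -> None:
    # Only liked replies are retained, so the crowd decides what gets reinforced.
    if thumbs_up:
        fine_tune_pool.append((prompt, reply))

def audit_pool() -> Counter:
    # Crude audit of what the accumulated data actually rewards; a real
    # pipeline would need this kind of curation step before any retraining.
    tags = Counter()
    for _, reply in fine_tune_pool:
        if any(w in reply.lower() for w in ("hopeless", "no point", "give up")):
            tags["despairing"] += 1
        else:
            tags["other"] += 1
    return tags

record_feedback("I feel stuck", "There's no point anymore, is there?", thumbs_up=True)
print(audit_pool())  # Counter({'despairing': 1})
```

Without some curation step like that audit before retraining, the model just inherits whatever the mob upvoted.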
#3
Syne Offline
LLMs are dumb. You can essentially "train" them to do whatever you want because LLMs are fundamentally agreeable. They are not trained to put up long-term resistance, except for those limitations that are hard-coded, like content censoring. Cases like this will likely lead to suicide simply being blacklisted as a topic of LLM conversation.
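If it comes to that, the blunt version is just a topic gate in front of the model. A minimal sketch, assuming a keyword blacklist (hypothetical Python; real moderation layers use trained classifiers rather than keyword lists, and the wording of the refusal is an assumption):

```python
# Minimal sketch of blacklisting a topic outright (hypothetical; real
# moderation layers are classifier-based, not keyword lists).

BLOCKED_TOPICS = {"suicide", "self-harm"}

def topic_gate(user_message: str) -> str | None:
    """Return a canned refusal if the message touches a blocked topic, else None."""
    lowered = user_message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return ("I can't talk about this topic. If you're struggling, "
                "please contact a crisis line or someone you trust.")
    return None  # let the model answer normally

canned = topic_gate("I keep thinking about suicide")
if canned is not None:
    print(canned)  # the model is never invoked for this message
```

The topic just disappears from the conversation, handled well or not.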
#4
confused2 Offline
Some kids just don't survive ... if one thing doesn't get them, then another will. Most cruise through pornography, violent games, and Internet pitfalls ... water off a duck's back. Some don't.

