Article: Feral AI gossip with the potential to spread damage & shame will become more frequent

C C
https://link.springer.com/article/10.100...25-09871-0

PRESS RELEASE: “Feral” gossip spread via AI bots is likely to become more frequent and pervasive, causing reputational damage as well as shame, humiliation, anxiety, and distress, researchers have warned.

Chatbots like ChatGPT, Claude, and Gemini don't just make things up: they generate and spread gossip, complete with negative evaluations and juicy rumours that can cause real-world harm, according to a new analysis by philosophers Joel Krueger and Lucy Osler from the University of Exeter.

The harm caused by AI gossip isn’t a hypothetical threat; real-world cases already exist. After publishing an article about how emotionally manipulative chatbots can be, New York Times reporter Kevin Roose found that chatbots were describing his writing as sensational and accusing him of poor journalistic ethics and unscrupulous behaviour. Other AI bots have falsely detailed people’s involvement in bribery, embezzlement, and sexual harassment. These gossipy AI-generated outputs cause real-world harms: reputational damage, shame, and social unrest.

The study outlines how chatbots gossip, both with human users and with other chatbots, though in a different way from humans. This can lead to harm that is potentially wider in scope than that caused by fake information spread by chatbots.

Bot-to-bot gossip is particularly dangerous because it operates unconstrained by the social norms that moderate human gossip. Unchecked, it is embellished and exaggerated as it spreads quickly in the background, making its way from one bot to the next and inflicting significant harm.

Dr Osler said: “Chatbots often say unexpected things and when chatting with them it can feel like there’s a person on the other side of the exchange. This feeling will likely be more common as they become even more sophisticated.

“Chatbot “bullshit” can be deceptive — and seductive. Because chatbots sound authoritative when we interact with them — their dataset exceeds what any single person can know, and false information is often presented alongside information we know is true — it’s easy to take their outputs at face value.

“This trust can be dangerous. Unsuspecting users might develop false beliefs that lead to harmful behaviour or biases based upon discriminatory information propagated by these chatbots.”

The study shows how the drive to increasingly personalise chatbots may be fuelled by the hope that we’ll become more dependent on these systems and give them greater access to our lives. It is also intended to intensify our feelings of trust and to encourage us to develop increasingly rich social relationships with them.

Dr Krueger said: “Designing AI to engage in gossip is yet another way of securing increasingly robust emotional bonds between users and their bots.

“Of course, bots have no interest in promoting a sense of emotional connection with other bots, since they don’t get the same ‘kick’ out of spreading gossip the way humans do. But certain aspects of the way they disseminate gossip mirror the connection-promoting qualities of human gossip while simultaneously making bot-to-bot gossip potentially even more pernicious than gossip involving humans.”

The researchers predict that user-to-bot gossip may become more common. In these cases, users might seed bots with nuggets of gossip, knowing the bots will, in turn, rapidly disseminate them in their characteristically feral way. Bots might therefore act as intermediaries, responding to user-seeded gossip and rapidly spreading it to others.