Research  World-first tool reduces harmful engagement with AI-generated explicit images (Grok)

Posted by C C
https://www.eurekalert.org/news-releases/1112782

INTRO: A new evidence-based online educational tool aims to curb the watching, sharing, and creation of AI-generated explicit imagery.

Developed by researchers at University College Cork (UCC), the free 10-minute intervention Deepfakes/Real Harms is designed to reduce users’ willingness to engage with harmful uses of deepfake technology, including non-consensual explicit content.

In the wake of the ongoing Grok AI-undressing controversy, pressure is mounting on platforms, regulators, and lawmakers to confront the rapid spread of these tools. UCC researchers say educating internet users to discourage engagement with AI-generated sexual exploitation must also be a central part of the response.

Myths drive participation in non-consensual AI imagery. UCC researchers found that people's engagement with non-consensual synthetic intimate imagery, often and mistakenly referred to as "deepfake pornography", is associated with belief in six myths about deepfakes. These include the belief that the images are only harmful if viewers think they are real, or that public figures are legitimate targets for this kind of abuse.

The researchers found that completing the free, online 10-minute intervention, which encourages reflection and empathy with victims of AI imagery abuse, significantly reduced belief in common deepfake myths and, crucially, lowered users’ intentions to engage with harmful uses of deepfake technology.

Using empathy to combat AI imagery abuse at its source. The intervention has been tested with more than two thousand international participants of varied ages, genders, and levels of digital literacy, with effects evident both immediately and at a follow-up weeks later.

The intervention tool, called Deepfakes/Real Harms, is now freely available at https://www.ucc.ie/en/deepfake-real-harms/.

Lead researcher John Twomey, UCC School of Applied Psychology, said: "There is a tendency to anthropomorphise AI technology – blaming Grok for creating explicit images and even running headlines claiming Grok 'apologised' afterwards. But human users are the ones deciding to harass and defame people in this manner. Our findings suggest that educating individuals about the harms of AI identity manipulation can help to stop this problem at source." (MORE - details)

