Jan 16, 2026 04:05 AM
https://www.eurekalert.org/news-releases/1112782
INTRO: A new evidence-based online educational tool aims to curb the watching, sharing, and creation of AI-generated explicit imagery.
Developed by researchers at University College Cork (UCC), the free 10-minute intervention Deepfakes/Real Harms is designed to reduce users’ willingness to engage with harmful uses of deepfake technology, including non-consensual explicit content.
In the wake of the ongoing Grok AI-undressing controversy, pressure is mounting on platforms, regulators, and lawmakers to confront the rapid spread of these tools. UCC researchers say educating internet users to discourage engagement with AI-generated sexual exploitation must also be a central part of the response.
False myths drive participation in non-consensual AI imagery. UCC researchers found that people’s engagement with non-consensual synthetic intimate imagery, often, and mistakenly, referred to as “deepfake pornography”, is associated with belief in six myths about deepfakes. These include the belief that the images are only harmful if viewers think they are real, and that public figures are legitimate targets for this kind of abuse.
The researchers found that completing the free, online 10-minute intervention, which encourages reflection and empathy with victims of AI imagery abuse, significantly reduced belief in common deepfake myths and, crucially, lowered users’ intentions to engage with harmful uses of deepfake technology.
Using empathy to combat AI imagery abuse at its source. The intervention has been tested with more than two thousand international participants of varied ages, genders, and levels of digital literacy, with effects evident both immediately and at a follow-up weeks later.
The intervention tool, called Deepfakes/Real Harms, is now freely available at https://www.ucc.ie/en/deepfake-real-harms/.
Lead researcher John Twomey, UCC School of Applied Psychology, said: “There is a tendency to anthropomorphise AI technology – blaming Grok for creating explicit images and even running headlines claiming Grok “apologised” afterwards. But human users are the ones deciding to harass and defame people in this manner. Our findings suggest that educating individuals about the harms of AI identity manipulation can help to stop this problem at source.” (MORE - details)
