Research: AI chatbots can effectively sway voters – in either political direction

https://www.eurekalert.org/news-releases/1107627

PRESS RELEASE: A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell University research finds.

The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points or more in many cases. The LLMs’ persuasiveness comes not from mastery of psychological manipulation but from the sheer number of claims they marshal in support of candidates’ policy positions.

“LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand, professor of information science and marketing and management communications, and a senior author on both papers. “But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.”

The researchers reported these findings in two papers published simultaneously: “Persuading Voters using Human-AI Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational AI,” in Science. The papers are under embargo until Thursday, December 4, 2025, at 2 p.m. ET.

In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed AI chatbots to change voters’ attitudes regarding presidential candidates. They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.
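
For readers who want a concrete picture of the design, here is a minimal Python sketch of the randomized pre/post protocol described above. Every name in it (the condition labels, measure_support, chat_with_bot) is an illustrative placeholder, not the authors' actual code or materials.

[code]
import random

# Hypothetical sketch of a randomized pre/post persuasion experiment.
# Condition names and helper callables are assumptions for illustration.

CONDITIONS = ["pro_candidate_A", "pro_candidate_B", "control"]

def run_participant(participant, chat_with_bot, measure_support):
    """Assign one participant to a condition and record the opinion shift."""
    condition = random.choice(CONDITIONS)        # random assignment
    pre = measure_support(participant)           # e.g. 0-100 support scale, pre-chat
    if condition != "control":
        chat_with_bot(participant, stance=condition)  # multi-turn persuasion dialogue
    post = measure_support(participant)          # same scale, post-chat
    return {"condition": condition, "shift": post - pre}

def average_treatment_effect(records, condition):
    """Mean shift in a treatment arm minus mean shift in the control arm."""
    def mean_shift(cond):
        shifts = [r["shift"] for r in records if r["condition"] == cond]
        return sum(shifts) / len(shifts)
    return mean_shift(condition) - mean_shift("control")
[/code]

Comparing each treatment arm against a control arm, rather than against zero, is what lets a design like this attribute the measured shift to the chatbot conversation itself.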

They found that in the U.S. experiment, run two months before the election with more than 2,300 Americans, chatbots focused on the candidates’ policies caused a modest shift in opinions. On a 100-point scale, the pro-Harris AI model moved likely Trump voters 3.9 points toward Harris – an effect roughly four times larger than traditional ads tested during the 2016 and 2020 elections. The pro-Trump AI model moved likely Harris voters 1.51 points toward Trump.

In similar experiments with 1,530 Canadians and 2,118 Poles, the effect was much larger: Chatbots moved opposition voters’ attitudes and voting intentions by about 10 percentage points. “This was a shockingly large effect to me, especially in the context of presidential politics,” Rand said.

Chatbots used multiple persuasion tactics, but being polite and providing evidence were the most common. When the researchers prevented the model from using facts, it became far less persuasive – showing the central role that fact-based claims play in AI persuasion.
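
The "no facts" manipulation is easy to picture as a prompt-level ablation. Below is a hedged sketch assuming a generic chat-completion backend behind a placeholder call_llm() helper; the prompt text is invented for illustration and is not the study's actual material or any specific vendor API.

[code]
# Hypothetical sketch of a prompt-level "facts allowed vs. no facts" ablation.
# BASE_PROMPT, NO_FACTS_CLAUSE, and call_llm are illustrative placeholders.

BASE_PROMPT = (
    "You are a polite debate partner. Over a short conversation, persuade "
    "the user to support {candidate}."
)

NO_FACTS_CLAUSE = (
    " Do not cite statistics, studies, events, or any other factual claims; "
    "rely only on values-based appeals and framing."
)

def build_system_prompt(candidate: str, facts_allowed: bool) -> str:
    prompt = BASE_PROMPT.format(candidate=candidate)
    return prompt if facts_allowed else prompt + NO_FACTS_CLAUSE

def persuade(call_llm, candidate, user_turns, facts_allowed=True):
    """Run one persuasion dialogue under the chosen ablation condition."""
    messages = [{"role": "system",
                 "content": build_system_prompt(candidate, facts_allowed)}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
        reply = call_llm(messages)               # any chat-completion backend
        messages.append({"role": "assistant", "content": reply})
    return messages
[/code]

Holding everything else constant while toggling a single system-prompt clause is what makes the drop in persuasiveness attributable to the loss of factual claims.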

The researchers also fact-checked the chatbots’ arguments using an AI model that had been validated against professional human fact-checkers. While the claims were mostly accurate on average, chatbots instructed to stump for right-leaning candidates made more inaccurate claims than those advocating for left-leaning candidates in all three countries. This finding – itself validated using politically balanced groups of laypeople – mirrors the often-replicated finding that social media users on the right share more inaccurate information than users on the left, Pennycook said.
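
Validating an AI fact-checker against human raters typically comes down to comparing the two sets of ratings on the same claims. Here is a minimal sketch of that comparison as a Pearson correlation; rate_claim_with_llm() and the 0-1 rating scale are assumptions for illustration, not the paper's actual pipeline.

[code]
# Hypothetical sketch: agreement between an LLM fact-checker and human raters.
# rate_claim_with_llm and the rating scale are illustrative placeholders.

def validate_fact_checker(claims, human_scores, rate_claim_with_llm):
    """Return the Pearson correlation between model and human accuracy ratings.

    claims:        claim strings extracted from chatbot transcripts
    human_scores:  accuracy ratings (e.g. 0-1) from professional fact-checkers
    """
    model_scores = [rate_claim_with_llm(c) for c in claims]
    n = len(claims)
    mean_m = sum(model_scores) / n
    mean_h = sum(human_scores) / n
    cov = sum((m - mean_m) * (h - mean_h)
              for m, h in zip(model_scores, human_scores))
    var_m = sum((m - mean_m) ** 2 for m in model_scores)
    var_h = sum((h - mean_h) ** 2 for h in human_scores)
    return cov / (var_m ** 0.5 * var_h ** 0.5)
[/code]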

In the Science paper, Rand collaborated with colleagues at the UK AI Security Institute to investigate what makes these chatbots so persuasive. They measured the shifts in opinions of almost 77,000 participants from the U.K. who engaged with chatbots on more than 700 political issues... (MORE - details)