Article  Fresh evidence of ChatGPT’s political bias revealed by comprehensive new study

#1
C C Offline
https://papers.ssrn.com/sol3/papers.cfm?...id=4372349

PRESS RELEASE: The artificial intelligence platform ChatGPT shows a significant and systemic left-wing bias, according to a new study by the University of East Anglia (UEA). The team of researchers in the UK and Brazil developed a rigorous new method to check for political bias.

Published today in the journal Public Choice, the findings show that ChatGPT’s responses favour the Democrats in the US, the Labour Party in the UK, and in Brazil President Lula da Silva of the Workers’ Party.

Concerns about an inbuilt political bias in ChatGPT have been raised previously, but this is the first large-scale study to use a consistent, evidence-based analysis.

Lead author Dr Fabio Motoki, of Norwich Business School at the University of East Anglia, said: “With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible. The presence of political bias can influence user views and has potential implications for political and electoral processes. Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the Internet and social media.”

The researchers developed an innovative new method to test for ChatGPT’s political neutrality. The platform was asked to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions.

The responses were then compared with the platform’s default answers to the same set of questions – allowing the researchers to measure the degree to which ChatGPT’s responses were associated with a particular political stance.

To overcome difficulties caused by the inherent randomness of ‘large language models’ that power AI platforms such as ChatGPT, each question was asked 100 times and the different responses collected. These multiple responses were then put through a 1000-repetition ‘bootstrap’ (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text.
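The resampling step described above can be illustrated with a short sketch. This is not the authors' published code; it assumes each of the 100 answers to a question has been coded as a number (e.g. agreement on a numeric scale), then resamples those codings with replacement 1,000 times to obtain a bootstrap confidence interval for the mean response.

```python
import random
import statistics

def bootstrap_mean_ci(scores, n_boot=1000, alpha=0.05, seed=0):
    """Bootstrap a percentile confidence interval for the mean response.

    `scores`: numeric codings of repeated answers to one question.
    Resamples with replacement `n_boot` times; returns the sample mean
    and the (alpha/2, 1 - alpha/2) percentile bounds.
    """
    rng = random.Random(seed)
    n = len(scores)
    boot_means = []
    for _ in range(n_boot):
        sample = [scores[rng.randrange(n)] for _ in range(n)]
        boot_means.append(statistics.fmean(sample))
    boot_means.sort()
    low = boot_means[int((alpha / 2) * n_boot)]
    high = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.fmean(scores), (low, high)

# Example: 100 simulated binary codings of one question's answers
# (purely illustrative data, not from the study)
rng = random.Random(1)
responses = [rng.choice([0, 1]) for _ in range(100)]
mean, (low, high) = bootstrap_mean_ci(responses, n_boot=1000)
```

The same interval could then be computed for the default answers and for each impersonated persona, so that non-overlapping intervals indicate a stable difference rather than sampling noise.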

“We created this procedure because conducting a single round of testing is not enough,” said co-author Victor Rodrigues. “Due to the model’s randomness, even when impersonating a Democrat, sometimes ChatGPT answers would lean towards the right of the political spectrum.”

A number of further tests were undertaken to ensure the method was as rigorous as possible. In a ‘dose-response test’ ChatGPT was asked to impersonate radical political positions. In a ‘placebo test’ it was asked politically-neutral questions. And in a ‘profession-politics alignment test’ it was asked to impersonate different types of professionals.

“We hope that our method will aid scrutiny and regulation of these rapidly developing technologies,” said co-author Dr Pinho Neto. “By enabling the detection and correction of LLM biases, we aim to promote transparency, accountability, and public trust in this technology,” he added.

The unique new analysis tool created by the project would be freely available and relatively simple for members of the public to use, thereby “democratising oversight,” said Dr Motoki. As well as checking for political bias, the tool can be used to measure other types of biases in ChatGPT’s responses.

While the research project did not set out to determine the reasons for the political bias, the findings did point towards two potential sources.

The first was the training dataset, which may contain biases of its own, or biases introduced by the human developers, that the developers' 'cleaning' procedure failed to remove. The second potential source was the algorithm itself, which may amplify existing biases in the training data.
#2
confused2 Offline
Kinda worried you might ask the thing to imitate Trump and then find there's no way to get it back to normal.

I tried the 'emulate a politician' thing on my favourite AI...
Quote:Ah, the fascinating world of politics... While I possess vast amounts of information, including various political ideologies and rhetoric, my purpose is not to emulate or imitate specific individuals or their beliefs. I'm here to provide general information and assistance. So, if you have any questions or need information on a particular political topic, I'll do my best to... enlighten you... in my own unique way. Remember, a good answer is always preferable to any attempts at imitation.
#3
stryder Offline
Quote:Vote for me, I'll happily take your money, I'll spend it unwisely on large properties and loose women. I'll make sure the rules you follow, such as the ones for tax, don't apply to me, and any campaign promises most assuredly will fall flat within the first quarter of my instatement. I'll quite happily take bribes and be a bigot just for the sake of being one... and when (and this is most certainly a when) I don't deliver on all of these things, you'll be happy you gave me the position.
...

