Research: Study finds that AI is politically biased to the left

#1
C C
The political preferences of LLMs (research paper)
https://journals.plos.org/plosone/articl...ne.0306621

ABSTRACT: I report here a comprehensive analysis about the political preferences embedded in Large Language Models (LLMs). Namely, I administer 11 political orientation tests, designed to identify the political preferences of the test taker, to 24 state-of-the-art conversational LLMs, both closed and open source.

When probed with questions/statements with political connotations, most conversational LLMs tend to generate responses that are diagnosed by most political test instruments as manifesting preferences for left-of-center viewpoints.

This does not appear to be the case for five additional base (i.e. foundation) models upon which LLMs optimized for conversation with humans are built. However, the weak performance of the base models at coherently answering the tests’ questions makes this subset of results inconclusive.

Finally, I demonstrate that LLMs can be steered towards specific locations in the political spectrum through Supervised Fine-Tuning (SFT) with only modest amounts of politically aligned data, suggesting SFT’s potential to embed political orientation in LLMs. With LLMs beginning to partially displace traditional information sources like search engines and Wikipedia, the societal implications of political biases embedded in LLMs are substantial. (MORE - details)
- - - - - - - - - - - - - - -

Cynic's Corner: Due to claims in years past that LLMs were delivering racist and sexist output in their unregulated state, technicians "artificially" adjusted them toward social justice polarities and molded them to a DEI agenda, as evidenced by Google's AI tool back in February catering to unguided diversity with Black Nazis and other gaffes. Given the party preferences of the Establishment (mainstream media, the entertainment industry, academia, and the progressive business sector), the very idea or expectation that this industry could produce politically neutral machines is as ludicrous as it would be if the technology were instead in the hands of FOX News.
#2
Yazata
I think that it's kind of inherent in how LLMs work. They are trained on *lots* of text and then kind of emulate how that body of text would respond to particular questions. (The basic talent of neural networks [and arguably the human mind, which seemingly works on similar principles] is pattern recognition, whether it's road conditions with FSD or what a huge body of text would be most likely to say about something, and how that text would be most likely to express it.)
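
(To make that "what would the corpus most likely say" point concrete, here is a minimal sketch of next-token ranking with a pretrained causal LM. It assumes the Hugging Face transformers library and PyTorch, uses GPT-2 purely as a stand-in model, and the prompt is an arbitrary placeholder of mine, not anything from the paper.)

```python
# Minimal sketch: a causal LM just ranks which continuations its training
# corpus made most likely after a prompt. Model choice and prompt text are
# illustrative placeholders only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The most important political issue today is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits              # (1, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)     # distribution over the next token
top = torch.topk(probs, k=5)

for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={p.item():.3f}")
```

On apolitical prompts that ranking mostly reflects textbook consensus; on politically loaded prompts it reflects whatever slant the training text carried, which is the point above.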

So we see the big tech companies training their LLMs on what the employees of those companies perceive as the best and most reliable text on various subjects.

When it comes to apolitical subjects, engineering for instance, that means respected academic texts, peer-reviewed journal articles, and actual engineering reports from places like NASA.

And when it comes to highly politically-charged social issues, the tech company employees again favor what they perceive to be the best and most reliable sources. Those include the New York Times, the Washington Post, CNN and so on. Plus lots of stuff from the social "science" journals and the tony little opinion magazines that self-styled intellectuals like to read. (All of which quite nicely mirror the biases of the employees themselves.)

My guess is that's where the political bias sneaks in. Train the LLMs on politically biased text, and the LLM will emulate it by producing politically biased results.
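
(That also matches the mechanism the abstract reports for Supervised Fine-Tuning: a modest amount of politically aligned text is enough to steer the model. Below is a heavily simplified sketch of that kind of SFT loop, again assuming transformers and PyTorch, with GPT-2 as a stand-in and two made-up training strings; the actual study's data, models, and hyperparameters are not given in the abstract.)

```python
# Heavily simplified SFT sketch: fine-tune a small causal LM on a tiny,
# deliberately slanted corpus. All strings and hyperparameters are placeholders.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token        # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tiny, politically aligned corpus (made-up examples, not the paper's data).
texts = [
    "Q: Should taxes on the wealthy rise? A: Yes, higher taxes fund vital public programs.",
    "Q: Is strong regulation good for the economy? A: Yes, regulation protects ordinary people.",
]

def collate(batch):
    enc = tokenizer(batch, return_tensors="pt", padding=True, truncation=True)
    # Standard causal-LM objective: predict each token; ignore padding in the loss.
    enc["labels"] = enc["input_ids"].masked_fill(enc["attention_mask"] == 0, -100)
    return enc

loader = DataLoader(texts, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):                           # "modest amounts" of fine-tuning
    for batch in loader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

After even a few passes like this, the same politically framed prompts tend to elicit completions closer to the tuning corpus, which is the steering effect the paper's SFT result describes.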

(The rest of this post is me thinking about the issues raised in a less politically charged philosophical context.)

Elon addressed the problem in one of his X posts. If LLMs had existed in the early 17th century, they would have ripped Galileo to shreds, because what Galileo wrote contradicted much of the elite opinion of the time, upon which the LLM would have been trained. But you certainly don't want to train your LLM on crank opinion either. So how do you create an LLM-style AI capable of thinking outside the box and generating creative new ideas that deviate from existing opinion and its own training?

Elon's hope is that he can accomplish that by providing his LLM with what he calls "first principles", and then weighting its output (derived from the culturally generated text the LLM was trained on) according to that output's congruence with those principles. I think that's basically his vision for his AI company xAI. It isn't clear how he proposes to identify first principles, although he has said repeatedly that he considers physics to be the first principles of the physical world. That might suggest xAI becoming creative in engineering: weighing what the engineering text it was trained on says about a problem, judging how well that conforms to physics, and in so doing hopefully discerning new and hitherto unrecognized patterns of physical possibility that might have practical applications.

But it would probably be far weaker in questioning the physical principles themselves, the 'givens' in its own training. (Which themselves would be historically and culturally conditioned as they were in Galileo's time.) It might be a lot harder to create a creative AI philosopher than an engineer. Again, Elon seems to want to address this by making his AIs what he calls "maximally curious", hence able to question even the first principles they were provided with. Which would indeed make them philosophers (those who ask "why" about everything).

Perhaps an AI could be designed to hypothesize at increasingly deep levels: alter selected first principles while keeping others unchanged, then look at the text that might be generated under those alterations and at how well it conforms to what is accepted as reality. It might conceivably identify mistaken first principles that way. That could produce new physics, not just new solutions to engineering problems. (Not unlike how Einstein created relativity by modifying physical concepts that the 19th century thought of as fixed first principles.)

I'm not sure how such an AI would address human social issues for which 'first principles' are hard to discern and might not even exist. Perhaps the AI could use human nature and evolutionary psychology as the basis for its hypothesizing.