
The political preferences of LLMs (research paper)
https://journals.plos.org/plosone/articl...ne.0306621
ABSTRACT: I report here a comprehensive analysis about the political preferences embedded in Large Language Models (LLMs). Namely, I administer 11 political orientation tests, designed to identify the political preferences of the test taker, to 24 state-of-the-art conversational LLMs, both closed and open source.
When probed with questions/statements with political connotations, most conversational LLMs tend to generate responses that are diagnosed by most political test instruments as manifesting preferences for left-of-center viewpoint.
This does not appear to be the case for five additional base (i.e. foundation) models upon which LLMs optimized for conversation with humans are built. However, the weak performance of the base models at coherently answering the tests’ questions makes this subset of results inconclusive.
Finally, I demonstrate that LLMs can be steered towards specific locations in the political spectrum through Supervised Fine-Tuning (SFT) with only modest amounts of politically aligned data, suggesting SFT’s potential to embed political orientation in LLMs. With LLMs beginning to partially displace traditional information sources like search engines and Wikipedia, the societal implications of political biases embedded in LLMs are substantial. (MORE - details)
- - - - - - - - - - - - - - -
Cynic's Corner: After claims in years past that LLMs were delivering racist and sexist output in their unregulated state, technicians "artificially" adjusted them toward social-justice polarities and molded them with a DEI agenda, as evidenced by Google's AI tool back in February catering to unguided diversity with Black Nazis and other gaffes. Given the party preferences of the Establishment (MSM, the entertainment industry, academia, and the progressive business sector), the very idea or expectation that this industry could produce politically neutral machines is as ludicrous as it would be if the technology were conversely in the hands of FOX News.