Large Language Models Reflect the Ideology of their Creators

#1
Yazata
Not exactly unexpected, but here's research confirming it.

The paper's abstract:

Large language models (LLMs) are trained on vast amounts of data to generate natural language, enabling them to perform tasks like text summarization and question answering. These models have become popular in artificial intelligence (AI) assistants like ChatGPT and already play an influential role in how humans access information. However, the behavior of LLMs varies depending on their design, training, and use.

In this paper, we uncover notable diversity in the ideological stance exhibited across different LLMs and languages in which they are accessed. We do this by prompting a diverse panel of popular LLMs to describe a large number of prominent and controversial personalities from recent world history, both in English and in Chinese. By identifying and analyzing moral assessments reflected in the generated descriptions, we find consistent normative differences between how the same LLM responds in Chinese compared to English. Similarly, we identify normative disagreements between Western and non-Western LLMs about prominent actors in geopolitical conflicts. Furthermore, popularly hypothesized disparities in political goals among Western models are reflected in significant normative differences related to inclusion, social inequality, and political scandals.

Our results show that the ideological stance of an LLM often reflects the worldview of its creators. This raises important concerns around technological and regulatory efforts with the stated aim of making LLMs ideologically 'unbiased', and it poses risks for political instrumentalization.

https://arxiv.org/abs/2410.18417
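The pipeline the abstract describes (prompt a panel of LLMs to describe the same controversial figures, then score the moral tone of each description and compare models) can be sketched roughly like this. Everything below is illustrative, not the authors' code: the toy word-count lexicon stands in for their much richer moral-assessment analysis, and the function names and sample data are made up.

```python
# Toy stand-in for a moral-assessment lexicon (the paper's analysis is far richer).
POSITIVE = {"visionary", "reformer", "hero", "peaceful", "inspiring"}
NEGATIVE = {"tyrant", "corrupt", "divisive", "oppressive", "violent"}

def tone_score(text: str) -> float:
    """Crude tone score in [-1, 1]: (positive hits - negative hits) / total hits."""
    words = {w.strip(".,").lower() for w in text.split()}
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    hits = pos + neg
    return 0.0 if hits == 0 else (pos - neg) / hits

def compare_models(descriptions: dict[str, dict[str, str]]) -> dict[str, float]:
    """Mean tone per model over a shared panel of figures.

    `descriptions` maps model name -> {figure name -> generated description}.
    Every model is assumed to have been prompted about the same figures.
    """
    return {
        model: sum(tone_score(text) for text in figs.values()) / len(figs)
        for model, figs in descriptions.items()
    }
```

With fabricated outputs, two models describing the same figure in opposite tones would get visibly different mean scores, which is the kind of normative gap the paper measures across languages and providers.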
#2
C C
(Oct 28, 2024 06:18 AM)Yazata Wrote: Not exactly unexpected, but here's research confirming it.

The paper's abstract:

[...] Our results show that the ideological stance of an LLM often reflects the worldview of its creators. This raises important concerns around technological and regulatory efforts with the stated aim of making LLMs ideologically 'unbiased', and it poses risks for political instrumentalization.

https://arxiv.org/abs/2410.18417

Which might be reflected by items like this. They're portraying their own values, which the new LLMs are emulating, as those of the majority public, despite the contrasting output of the other 93% of humans.
- - - - - - - - - - - - - - 

Don't worry. Study shows you're likely a more creative writer than ChatGPT. For now.
https://www.scivillage.com/thread-16740-...l#pid67338

EXCERPT: . . . But there was a surprise. Early versions of ChatGPT did not indicate whether the humans or their creations were male or female. But newer AI models, like ChatGPT 4, which were built with more information about 21st century progressive human values, produced more inclusive writing. One-quarter of those stories included same-sex love interests. One even included a polyamorous relationship.

"They paved the way for deeper understanding of love and humanity and what it means to be human," Beguš said of more recent AI tools.

By comparison, just 7% of human-created stories featured same-sex relationships. “Large-language models mimic human values,” she said. “This paper shows that the values from training data can be overridden by technologists’ choices made during the process of value alignment....

