Scivillage.com Casual Discussion Science Forum

Full Version: Large Language Models Reflect the Ideology of their Creators
Not exactly unexpected, but here's research confirming it.

The paper's abstract:

Large language models (LLMs) are trained on vast amounts of data to generate natural language, enabling them to perform tasks like text summarization and question answering. These models have become popular in artificial intelligence (AI) assistants like ChatGPT and already play an influential role in how humans access information. However, the behavior of LLMs varies depending on their design, training, and use.

In this paper, we uncover notable diversity in the ideological stance exhibited across different LLMs and languages in which they are accessed. We do this by prompting a diverse panel of popular LLMs to describe a large number of prominent and controversial personalities from recent world history, both in English and in Chinese. By identifying and analyzing moral assessments reflected in the generated descriptions, we find consistent normative differences between how the same LLM responds in Chinese compared to English. Similarly, we identify normative disagreements between Western and non-Western LLMs about prominent actors in geopolitical conflicts. Furthermore, popularly hypothesized disparities in political goals among Western models are reflected in significant normative differences related to inclusion, social inequality, and political scandals.

Our results show that the ideological stance of an LLM often reflects the worldview of its creators. This raises important concerns around technological and regulatory efforts with the stated aim of making LLMs ideologically "unbiased", and it poses risks for political instrumentalization.

https://arxiv.org/abs/2410.18417
(Oct 28, 2024 06:18 AM)Yazata Wrote: Not exactly unexpected, but here's research confirming it.

The paper's abstract:

[...] Our results show that the ideological stance of an LLM often reflects the worldview of its creators. This raises important concerns around technological and regulatory efforts with the stated aim of making LLMs ideologically "unbiased", and it poses risks for political instrumentalization.

https://arxiv.org/abs/2410.18417

Which might be reflected in items like this. They're portraying their own values, which the new LLMs are emulating, as those of the majority public, despite the contrasting 93% of human-written output.
- - - - - - - - - - - - - - 

Don't worry. Study shows you're likely a more creative writer than ChatGPT. For now
https://www.scivillage.com/thread-16740-...l#pid67338

EXCERPT: . . . But there was a surprise. Early versions of ChatGPT did not indicate whether the humans or their creations were male or female. But newer AI models, like ChatGPT 4, which were built with more information about 21st century progressive human values, produced more inclusive writing. One-quarter of those stories included same-sex love interests. One even included a polyamorous relationship.

"They paved the way for deeper understanding of love and humanity and what it means to be human," Beguš said of more recent AI tools.

By comparison, just 7% of human-created stories featured same-sex relationships. “Large-language models mimic human values,” she said. “This paper shows that the values from training data can be overridden by technologists’ choices made during the process of value alignment....”