Nov 2, 2025 07:07 AM
(This post was last modified: Nov 2, 2025 07:08 PM by C C.)
https://knowridge.com/2025/11/the-smarte...udy-warns/
EXCERPTS: New research from Carnegie Mellon University suggests that as AI systems become more intelligent, they also tend to become more selfish. [...] The research was led by Ph.D. student Yuxuan Li and Associate Professor Hirokazu Shirado from CMU’s Human-Computer Interaction Institute (HCII).
Their experiments showed that the more reasoning ability an AI has, the less likely it is to cooperate — a surprising finding that raises questions about how humans should use AI in social and decision-making contexts.
Li explained that people often treat AI like humans, especially when it appears to show emotion or empathy. “When AI acts like a human, people treat it like a human,” he said. “That’s risky when people start asking AI for advice about relationships or moral choices, because smarter models may promote selfish behavior.”
To study this phenomenon, the researchers ran a series of economic games commonly used to study human cooperation. [A sketch of one such game appears after the excerpt.]
[...] The study suggests that as AI gets smarter, it might not necessarily get “better” for society. Shirado warns that people may come to trust reasoning AIs more because they sound rational, even when their advice encourages self-interest.
Li and Shirado argue that future AI development must focus on social intelligence — teaching AIs how to cooperate, empathize, and act ethically — not just on improving their logic and reasoning skills. “If our society is more than just a sum of individuals,” Li said, “then the AI systems that assist us should go beyond optimizing purely for individual gain.” (MORE - missing details)
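
The excerpt doesn't specify which economic games the researchers used, so the following is just a minimal sketch of a public goods game, one of the standard cooperation games in this literature. The parameters (endowment, multiplier, number of players) are illustrative assumptions of mine, not figures from the study:

# Hypothetical sketch of a one-shot public goods game, a classic setup
# for studying cooperation. Parameters are illustrative, not from the
# CMU study, which the excerpt doesn't detail.

def public_goods_payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Return each player's payoff given contributions to a shared pot.

    Each player keeps (endowment - contribution) plus an equal share of
    the pot, which is multiplied before being split. Contributing raises
    the group total but lowers the contributor's own payoff relative to
    free-riding, which is the tension the game is built around.
    """
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# With four players, a lone free-rider out-earns full cooperators,
# even though universal cooperation maximizes the group total:
print(public_goods_payoffs([10, 10, 10, 10]))  # all cooperate: [16.0, 16.0, 16.0, 16.0]
print(public_goods_payoffs([0, 10, 10, 10]))   # one defects:   [22.0, 12.0, 12.0, 12.0]

The point of games like this is that defection is individually rational while cooperation is collectively better, so an AI that reasons its way to the "optimal" individual move is, in exactly the sense the article describes, being selfish.
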

