Article: AI models fail to reproduce human judgements about rule violations + AI empathy

C C
Study: AI models fail to reproduce human judgements about rule violations
https://www.eurekalert.org/news-releases/988402

INTRO: In an effort to improve fairness or reduce backlogs, machine-learning models are sometimes designed to mimic human decision making, such as deciding whether social media posts violate toxic content policies.

But researchers from MIT and elsewhere have found that these models often do not replicate human decisions about rule violations. If models are not trained with the right data, they are likely to make different, often harsher judgements than humans would.

In this case, the “right” data are those that have been labeled by humans who were explicitly asked whether items defy a certain rule. Training involves showing a machine-learning model millions of examples of this “normative data” so it can learn a task.

But data used to train machine-learning models are typically labeled descriptively — meaning humans are asked to identify factual features, such as, say, the presence of fried food in a photo. If “descriptive data” are used to train models that judge rule violations, such as whether a meal violates a school policy that prohibits fried food, the models tend to over-predict rule violations.
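As a toy illustration of that distinction (a minimal sketch with invented data and thresholds, not the researchers' code or dataset), the snippet below trains the same classifier once on "descriptive" labels and once on more lenient "normative" labels for identical items, then compares how often each flags a violation:

```python
# Minimal sketch (invented toy data, not the study's code): compare a model
# trained on descriptive labels with one trained on normative labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Toy features describing an item, e.g. how "fried" a meal in a photo looks.
X = rng.normal(size=(n, 2))

# Descriptive label: the feature is factually present (fires relatively often).
descriptive_labels = (X[:, 0] > 0.0).astype(int)

# Normative label: a human asked whether the item violates the policy gives
# more benefit of the doubt and needs stronger evidence (an assumption here).
normative_labels = (X[:, 0] > 0.8).astype(int)

descriptive_model = LogisticRegression().fit(X, descriptive_labels)
normative_model = LogisticRegression().fit(X, normative_labels)

X_new = rng.normal(size=(1000, 2))
print("flagged as violations (descriptive-trained):",
      descriptive_model.predict(X_new).mean())
print("flagged as violations (normative-trained):  ",
      normative_model.predict(X_new).mean())
# The descriptive-trained model flags many more items, mirroring the reported
# tendency to over-predict rule violations when descriptive data are used.
```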

This drop in accuracy could have serious implications in the real world. For instance, if a descriptive model is used to make decisions about whether an individual is likely to reoffend, the researchers’ findings suggest it may cast stricter judgements than a human would, which could lead to higher bail amounts or longer criminal sentences... (MORE - details)

PAPER: http://dx.doi.org/10.1126/sciadv.abq0701


When A.I. discloses personal information, users may empathize more
https://journals.plos.org/plosone/articl...ne.0283955

PRESS RELEASE: In a new study, participants showed more empathy for an online anthropomorphic artificial intelligence (A.I.) agent when it seemed to disclose personal information about itself while chatting with participants. Takahiro Tsumura of The Graduate University for Advanced Studies, SOKENDAI in Tokyo, Japan, and Seiji Yamada of the National Institute of Informatics, also in Tokyo, present these findings in the open-access journal PLOS ONE on May 10, 2023.

The use of A.I. in daily life is increasing, raising interest in factors that might contribute to the level of trust and acceptance people feel towards A.I. agents. Prior research has suggested that people are more likely to accept artificial objects if the objects elicit empathy. For instance, people may empathize with cleaning robots, robots that mimic pets, and anthropomorphic chat tools that provide assistance on websites.

Earlier research has also highlighted the importance of disclosing personal information in building human relationships. Building on those findings, Tsumura and Yamada hypothesized that self-disclosure by an anthropomorphic A.I. agent might boost people’s empathy toward such agents.

To test this idea, the researchers conducted online experiments in which participants had a text-based chat with an online A.I. agent that was visually represented by either a human-like illustration or an illustration of an anthropomorphic robot. The chat involved a scenario in which the participant and agent were colleagues on a lunch break at the agent’s workplace. In each conversation, the agent seemed to self-disclose either highly work-relevant personal information, less-relevant information about a hobby, or no personal information.

The final analysis included data from 918 participants whose empathy for the A.I. agent was evaluated using a standard empathy questionnaire. The researchers found that, compared to less-relevant self-disclosure, highly work-relevant self-disclosure from the A.I. agent was associated with greater empathy from participants. A lack of self-disclosure was associated with suppressed empathy. The agent’s appearance as either a human or anthropomorphic robot did not have a significant association with empathy levels.
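For readers curious what a group comparison of this kind might look like in code, here is a hedged sketch using simulated questionnaire scores (the group means, sizes, and variable names are invented; this is not the authors' analysis or data):

```python
# Hypothetical sketch with simulated empathy scores; not the authors' data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated questionnaire scores per self-disclosure condition (assumed means).
work_relevant = rng.normal(loc=3.6, scale=0.8, size=300)
hobby_related = rng.normal(loc=3.2, scale=0.8, size=300)
no_disclosure = rng.normal(loc=2.9, scale=0.8, size=300)

# One-way comparison across the three disclosure conditions.
f_stat, p_value = stats.f_oneway(work_relevant, hobby_related, no_disclosure)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

for name, scores in [("work-relevant", work_relevant),
                     ("hobby-related", hobby_related),
                     ("no disclosure", no_disclosure)]:
    print(f"{name:>13}: mean empathy = {scores.mean():.2f}")
```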

These findings suggest that self-disclosure by A.I. agents may, indeed, elicit empathy from humans, which could help inform future development of A.I. tools.

The authors add: “This study investigates whether self-disclosure by anthropomorphic agents affects human empathy. Our research will change the negative image of artifacts used in society and contribute to future social relationships between humans and anthropomorphic agents.”