Research  AI boosts creativity at a cost + AI maths + AI makes human-like reasoning mistakes

C C
10 profound answers about the math behind AI
https://bigthink.com/starts-with-a-bang/...elligence/

KEY POINTS: One of the biggest revolutions in the past few years has come in the form of AI: artificial intelligence, including in the form of generative AI, which can create (usually) informed responses to any inquiry at all. But artificial intelligence is much more than a buzzword or a “mystery box” that gives you answers; it’s a fascinating consequence of putting large, high-quality data sets together with intricate mathematical algorithms. In this piece, we get 10 profound answers to curious questions about artificial intelligence from Anil Ananthaswamy, author of Why Machines Learn: The Elegant Math Behind Modern AI... (MORE - details)


AI makes human-like reasoning mistakes
https://academic.oup.com/pnasnexus/artic...us/pgae233

PRESS RELEASE: Large language models (LMs) can complete abstract reasoning tasks, but they are susceptible to many of the same types of mistakes made by humans. Andrew Lampinen, Ishita Dasgupta, and colleagues tested state-of-the-art LMs and humans on three kinds of reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task.

The authors found the LMs to be prone to the same content effects as humans. Both humans and LMs are more likely to mistakenly label an invalid argument as valid when its semantic content is sensible and believable — for instance, accepting "All roses are flowers; some flowers fade quickly; therefore some roses fade quickly," whose conclusion is plausible even though it does not follow logically from the premises.

LMs are also just as bad as humans at the Wason selection task, in which the participant is presented with four cards with letters or numbers written on them (e.g., ‘D’, ‘F’, ‘3’, and ‘7’) and asked which cards they would need to flip over to verify the accuracy of a rule such as “if a card has a ‘D’ on one side, then it has a ‘3’ on the other side.” Humans often opt to flip over cards that offer no information about the validity of the rule but instead test its converse. In this example, humans tend to choose the card labeled ‘3,’ even though the rule does not imply that a card with ‘3’ has ‘D’ on the reverse; the logically informative cards are ‘D’ and ‘7,’ since only those could reveal a ‘D’ paired with something other than ‘3.’ LMs make this and other errors, showing a similar overall error rate to humans.
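As an aside, the card-selection logic above can be sketched in a few lines of Python. This is an illustrative model of the task, not code from the study: a card is worth flipping only if its hidden side could falsify the conditional "if P then Q."

```python
# Illustrative sketch of the Wason selection task (not from the paper).
# Rule: "if a card has 'D' on one side, then it has '3' on the other side."
# Cards have a letter on one side and a number on the other; a visible face
# warrants a flip only if the hidden side could falsify the rule.

def must_flip(face, p="D", q="3"):
    """Return True if the card showing `face` could falsify 'if P then Q'."""
    if face == p:                      # shows P: hidden side might not be Q
        return True
    if face.isdigit() and face != q:   # shows not-Q: hidden side might be P
        return True
    return False                       # other letters, or Q itself, prove nothing

cards = ["D", "F", "3", "7"]
print([c for c in cards if must_flip(c)])  # logically correct picks: ['D', '7']
```

Note that '3' is correctly excluded: flipping it tests the converse, which is exactly the error humans (and, per the study, LMs) tend to make.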

Human and LM performance on the Wason selection task improves if the rules about arbitrary letters and numbers are replaced with socially relevant relationships, such as people’s ages and whether a person is drinking alcohol or soda. According to the authors, LMs trained on human data seem to exhibit some human foibles in terms of reasoning—and, like humans, may require formal training to improve their logical reasoning performance.


Research shows AI can boost creativity for some, but at a cost
https://www.npr.org/2024/07/12/nx-s1-503...ty-writing

EXCERPTS: Can an AI chatbot make a person more creative?

Supporters of artificial intelligence say it can serve as a muse, but critics doubt it — they say that it does little more than remix existing work.

Now, new research suggests that elements of both arguments are right. AI might be able to help a person become more creative, but it risks decreasing creativity in society overall.

[...] In other words, the chatbot made each individual more creative, but it made the group that had AI help less creative.

Hauser describes the divergent result as a “classic social dilemma” — a situation where people benefit individually, but the group suffers.

“We do worry that, at large scale, if many people are using this… overall the diversity and creativity in the population will go down,” he says.

Annalee Newitz, a science fiction author and journalist, questions the findings. Trying to quantify whether a person is more creative is tricky: “I think that part of creativity is that it can’t really be measured in percentages like that,” Newitz says.

Nevertheless, when Newitz themselves tried reproducing some of the AI story ideas using the paper’s methods, they could clearly see how using AI would generate similar stories. [...] In the end, Newitz says that they wouldn’t blame anyone who wanted to try using AI to write a story. But ultimately, they think these tools miss the point of writing... (MORE - missing details)

