
Verbal nonsense reveals limitations of AI chatbots + Robot consensus + AI outperforms

C C Offline
Verbal nonsense reveals limitations of AI chatbots

INTRO: The era of artificial-intelligence chatbots that seem to understand and use language the way we humans do has begun. Under the hood, these chatbots use large language models, a particular kind of neural network. But a new study shows that large language models remain vulnerable to mistaking nonsense for natural language. To a team of researchers at Columbia University, it’s a flaw that might point toward ways to improve chatbot performance and help reveal how humans process language.

In a paper published online today in Nature Machine Intelligence, the scientists describe how they challenged nine different language models with hundreds of pairs of sentences. For each pair, people who participated in the study picked which of the two sentences they thought was more natural, meaning that it was more likely to be read or heard in everyday life. The researchers then tested the models to see if they would rate each sentence pair the same way the humans had.
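The paired-sentence test described here can be sketched in a few lines: a scoring function assigns each sentence a "naturalness" score, and the model's choice is whichever member of the pair scores higher. In the sketch below, a toy smoothed bigram (word-pair frequency) model stands in for an LLM's log-probability; the corpus and the sentence pair are invented for illustration, not taken from the study.

```python
# Toy sketch of the paired-sentence comparison: score both sentences,
# pick the one the model finds more probable. A smoothed bigram model
# stands in for a real LLM here.
from collections import Counter
import math

corpus = ("the cat sat on the mat the dog sat on the rug "
          "we read the paper at breakfast").split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def score(sentence, alpha=0.1):
    """Smoothed log-probability of a sentence under the bigram model."""
    words = sentence.lower().split()
    vocab = len(unigrams)
    total = 0.0
    for prev, word in zip(words, words[1:]):
        num = bigrams[(prev, word)] + alpha          # add-alpha smoothing
        den = unigrams[prev] + alpha * vocab
        total += math.log(num / den)
    return total

def pick_more_natural(a, b):
    """Return whichever sentence of the pair the model scores higher."""
    return a if score(a) >= score(b) else b

pair = ("the cat sat on the mat", "mat the on sat cat the")
print(pick_more_natural(*pair))  # prints "the cat sat on the mat"
```

The study's point is that even much stronger scorers than this one still sometimes rank a nonsense sentence above a natural one.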

In head-to-head tests, more sophisticated AIs based on what researchers refer to as transformer neural networks tended to perform better than simpler recurrent neural network models and statistical models that just tally the frequency of word pairs found on the internet or in online databases. But all the models made mistakes, sometimes choosing sentences that sound like nonsense to a human ear.

“That some of the large language models perform as well as they do suggests that they capture something important that the simpler models are missing,” said Nikolaus Kriegeskorte, PhD, a principal investigator at Columbia’s Zuckerman Institute and a coauthor on the paper. “That even the best models we studied still can be fooled by nonsense sentences shows that their computations are missing something about the way humans process language.” (MORE - details, no ads)

How do robots collaborate to achieve consensus?

INTRO: Making group decisions is no easy task, especially when the decision makers are a swarm of robots. To increase swarm autonomy in collective perception, a research team at the IRIDIA artificial intelligence research laboratory at the Université Libre de Bruxelles proposed an innovative self-organizing approach in which one robot at a time works temporarily as the “brain” to consolidate information on behalf of the group. Their paper was published Sept. 13 in Intelligent Computing, a Science Partner Journal. In the paper, the authors showed that their method improves collective perception accuracy by reducing sources of uncertainty... (MORE - details, no ads)
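The "temporary brain" idea can be loosely sketched as follows. This is not the IRIDIA algorithm, just a minimal illustration of the rotating-coordinator pattern: each round one robot is designated as the brain, pools the others' noisy local estimates of some environmental quantity, and broadcasts the pooled value back to the swarm. All names, noise levels, and the blending rule are invented for the sketch.

```python
# Loose sketch of one-robot-at-a-time consensus (rotating "brain"),
# not the published IRIDIA method. The swarm estimates an unknown
# environmental quantity from noisy individual readings.
import random

random.seed(0)
TRUE_VALUE = 0.7          # e.g. fraction of black tiles in the arena
NUM_ROBOTS = 10

def sense(noise=0.15):
    """One robot's noisy local estimate, clamped to [0, 1]."""
    return min(1.0, max(0.0, TRUE_VALUE + random.uniform(-noise, noise)))

def brain_pool(estimates):
    """The current brain consolidates all estimates into one broadcast."""
    return sum(estimates) / len(estimates)

beliefs = [sense() for _ in range(NUM_ROBOTS)]
for round_no in range(5):
    brain = round_no % NUM_ROBOTS            # rotate the brain role
    fresh = [sense() for _ in range(NUM_ROBOTS)]
    broadcast = brain_pool(fresh)
    # every robot blends its own belief toward the broadcast value
    beliefs = [0.5 * b + 0.5 * broadcast for b in beliefs]

print(round(sum(beliefs) / NUM_ROBOTS, 2))
```

Pooling averages out independent sensing noise, which is the "reducing sources of uncertainty" effect the intro mentions; rotating the brain avoids a fixed single point of failure.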

Artificial Intelligence: AI may outperform most humans at creative thinking task

INTRO: Large language model (LLM) AI chatbots may be able to outperform the average human at a creative thinking task where the participant devises alternative uses for everyday objects (an example of divergent thinking), suggests a study published in Scientific Reports. However, the human participants with the highest scores still outperformed the best chatbot responses... (MORE - details, no ads)
confused2 Offline
If AI outperforms even 50% of people doing work that requires some degree of intelligence .. we have a problem. Just finding someone who speaks (say) 25 languages (including Chinese) is problem enough.

