Should we stop AI hallucinations? + Why overtrain AI? + AI utterly defeated CAPTCHA

Posted by C C
How AI finally won its war on CAPTCHA images
https://www.sciencefocus.com/future-tech...vs-captcha

EXCERPTS: If AI is meant to be so intelligent, why can’t it identify a set of traffic lights? Well… it can. Artificial intelligence (AI) is so powerful today that most CAPTCHA images can be easily solved.

[...] Originally, the idea of image-based CAPTCHAs, named reCAPTCHA, was also to help train AIs to perform text recognition better when digitising books.

[...] We no longer need to train AIs this way – they’re more than able to cope. Research reported in July 2023 showed that most can solve CAPTCHA images with 96 per cent accuracy, compared to humans who range from 50–86 per cent.

The AIs are even adept at mimicking humans to fool the bot detectors, by copying our poor accuracy, for example, or even our mouse movements as we figure out which boxes to click... (MORE - details)
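
To make the "copying our poor accuracy" idea concrete, here is a purely illustrative toy simulation (not anything from the article): a near-perfect solver, using the 96 per cent figure quoted above, whose answers are deliberately corrupted so its measured accuracy falls back inside the human 50–86 per cent band. The function names and the 75 per cent target are assumptions.

```python
import random

def solver_answer(true_label: bool, solver_accuracy: float = 0.96) -> bool:
    """Simulate a near-perfect CAPTCHA tile classifier (96% figure from the article)."""
    return true_label if random.random() < solver_accuracy else not true_label

def human_like_answer(true_label: bool, target_accuracy: float = 0.75) -> bool:
    """Wrap the solver and inject extra mistakes so the measured accuracy
    lands inside the human 50-86 per cent band."""
    answer = solver_answer(true_label)
    flip = random.random() > (target_accuracy / 0.96)  # extra error on top of the solver's own
    return not answer if flip else answer

# Quick check of the measured accuracies over many simulated tiles.
labels = [random.random() < 0.5 for _ in range(10_000)]
raw = sum(solver_answer(y) == y for y in labels) / len(labels)
masked = sum(human_like_answer(y) == y for y in labels) / len(labels)
print(f"raw solver accuracy: {raw:.2f}, human-mimicking accuracy: {masked:.2f}")
```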


Can we stop AI hallucinations? And do we even want to?
https://www.freethink.com/robots-ai/ai-hallucinations

INTRO: As AI continues to advance, one major problem has emerged: “hallucinations.” These are outputs generated by the AI that have no basis in reality. Hallucinations can be anything from small mistakes to downright bizarre and made-up information. The issue makes many people wonder whether they can trust AI systems. If an AI can generate inaccurate or even totally fabricated claims, and make it sound just as plausible as accurate information, how can we rely on it for critical tasks?

Researchers are exploring various approaches to tackle the challenge of hallucinations, including using large datasets of verified information to train AI systems to distinguish between fact and fiction. But some experts argue that eliminating the chance of hallucinations entirely would also require stifling the creativity that makes AI so valuable.
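
As a toy illustration of the "verified information" idea (and of why it is hard), here is a hypothetical sketch that flags a claim as a possible hallucination when it is not covered by a store of verified statements. The word-overlap scoring, the threshold, and the two example facts are assumptions for illustration; a real system would rely on learned representations rather than word overlap.

```python
VERIFIED_FACTS = [
    "water boils at 100 degrees celsius at sea level",
    "the eiffel tower is located in paris",
]

def overlap(claim: str, fact: str) -> float:
    """Crude support score: fraction of the claim's words found in a verified fact."""
    claim_words, fact_words = set(claim.lower().split()), set(fact.lower().split())
    return len(claim_words & fact_words) / max(len(claim_words), 1)

def looks_supported(claim: str, threshold: float = 0.5) -> bool:
    """Flag the claim as supported if some verified fact covers enough of its words."""
    return any(overlap(claim, fact) >= threshold for fact in VERIFIED_FACTS)

print(looks_supported("the eiffel tower is in paris"))           # True: covered by a verified fact
print(looks_supported("the eiffel tower is located in berlin"))  # also True: naive overlap passes a false claim,
                                                                 # which is part of why hallucinations are hard to catch
```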

The stakes are high, as AI is playing an increasingly important role in sectors from healthcare to finance to media. The success of this quest could have far-reaching implications for the future of AI and its applications in our daily lives... (MORE - details)


By apparently overtraining them, researchers have seen neural networks discover novel solutions to problems.
https://www.quantamagazine.org/how-do-ma...-20240412/

INTRO: For all their brilliance, artificial neural networks remain as inscrutable as ever. As these networks get bigger, their abilities explode, but deciphering their inner workings has always been near impossible. Researchers are constantly looking for any insights they can find into these models.

A few years ago, they discovered a new one.

In January 2022, researchers at OpenAI, the company behind ChatGPT, reported that these systems, when accidentally allowed to munch on data for much longer than usual, developed unique ways of solving problems. Typically, when engineers build machine learning models out of neural networks — composed of units of computation called artificial neurons — they tend to stop the training at a certain point, called the overfitting regime. This is when the network basically begins memorizing its training data and often won’t generalize to new, unseen information. But when the OpenAI team accidentally trained a small network way beyond this point, it seemed to develop an understanding of the problem that went beyond simply memorizing — it could suddenly ace any test data.

The researchers named the phenomenon “grokking,” a term coined by science-fiction author Robert A. Heinlein to mean understanding something “so thoroughly that the observer becomes a part of the process being observed.” The overtrained neural network, designed to perform certain mathematical operations, had learned the general structure of the numbers and internalized the result. It had grokked and become the solution.
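
To make the setup concrete, below is a minimal sketch of the kind of experiment described: a tiny network trained on modular addition, with weight decay, far past the point where it has memorized its training split. The modulus, architecture, optimizer settings, and step budget are illustrative assumptions, not the original OpenAI configuration (which reportedly used a small transformer). On runs like this, the characteristic pattern is that training accuracy saturates early while test accuracy stays near chance for a long time and then jumps.

```python
import torch
import torch.nn as nn

P = 97  # task: predict (a + b) mod P from the pair (a, b)

# Enumerate every (a, b) pair and split them 50/50 into train and test.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2:]

model = nn.Sequential(
    nn.Embedding(P, 64),      # one learned vector per residue
    nn.Flatten(start_dim=1),  # concatenate the two operand embeddings
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, P),        # logits over the P possible answers
)
# Weight decay is widely reported as a key ingredient for grokking.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(30_000):  # deliberately train far past the point of memorization
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1_000 == 0:
        with torch.no_grad():
            train_acc = (model(pairs[train_idx]).argmax(-1) == labels[train_idx]).float().mean()
            test_acc = (model(pairs[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
        # Grokking signature: train accuracy saturates early, test accuracy jumps much later.
        print(f"step {step:>6}: train {train_acc:.2f}  test {test_acc:.2f}")
```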

“This [was] very exciting and thought provoking,” said Mikhail Belkin of the University of California, San Diego, who studies the theoretical and empirical properties of neural networks. “It spurred a lot of follow-up work.”

Indeed, others have replicated the results and even reverse-engineered them. The most recent papers not only clarified what these neural networks are doing when they grok but also provided a new lens through which to examine their innards. “The grokking setup is like a good model organism for understanding lots of different aspects of deep learning,” said Eric Michaud of the Massachusetts Institute of Technology.

Peering inside this organism is at times quite revealing. “Not only can you find beautiful structure, but that beautiful structure is important for understanding what’s going on internally,” said Neel Nanda, now at Google DeepMind in London... (MORE - details)