Hallucinations could blunt ChatGPT’s success
OpenAI says the problem’s solvable, Yann LeCun says we’ll see
https://spectrum.ieee.org/ai-hallucination#toggle-gdpr

INTRO: ChatGPT has wowed the world with the depth of its knowledge and the fluency of its responses, but one problem has hobbled its usefulness: It keeps hallucinating.

Yes, large language models (LLMs) hallucinate, a concept popularized by Google AI researchers in 2018. Hallucination in this context refers to mistakes in the generated text that are semantically or syntactically plausible but are in fact incorrect or nonsensical. In short, you can’t trust what the machine is telling you.

That’s why, while OpenAI’s Codex or GitHub’s Copilot can write code, an experienced programmer still needs to review the output—approving, correcting, or rejecting it before allowing it to slip into a code base where it might wreak havoc.

High school teachers are learning the same lesson. A ChatGPT-written book report or historical essay may be a breeze to read but could easily contain erroneous “facts” that the student was too lazy to root out.

Hallucinations are a serious problem. Bill Gates has mused that ChatGPT or similar large language models could someday provide medical advice to people without access to doctors. But you can’t trust advice from a machine prone to hallucinations.

Ilya Sutskever, OpenAI’s chief scientist and one of the creators of ChatGPT, says he’s confident that the problem will disappear with time as large language models learn to anchor their responses in reality. OpenAI has pioneered a technique to shape its models’ behaviors using something called reinforcement learning with human feedback (RLHF)... (MORE - details)