Article: Hallucinations could blunt ChatGPT’s success

#1
C C
OpenAI says the problem’s solvable, Yann LeCun says we’ll see
https://spectrum.ieee.org/ai-hallucination

INTRO: ChatGPT has wowed the world with the depth of its knowledge and the fluency of its responses, but one problem has hobbled its usefulness: It keeps hallucinating.

Yes, large language models (LLMs) hallucinate, a term popularized by Google AI researchers in 2018. Hallucination in this context refers to mistakes in the generated text that are semantically or syntactically plausible but are in fact incorrect or nonsensical. In short, you can’t trust what the machine is telling you.

That’s why, while OpenAI’s Codex or GitHub’s Copilot can write code, an experienced programmer still needs to review the output, approving, correcting, or rejecting it before allowing it to slip into a code base where it might wreak havoc.

High school teachers are learning the same. A ChatGPT-written book report or historical essay may be a breeze to read but could easily contain erroneous “facts” that the student was too lazy to root out.

Hallucinations are a serious problem. Bill Gates has mused that ChatGPT or similar large language models could someday provide medical advice to people without access to doctors. But you can’t trust advice from a machine prone to hallucinations.

Ilya Sutskever, OpenAI’s chief scientist and one of the creators of ChatGPT, says he’s confident that the problem will disappear with time as large language models learn to anchor their responses in reality. OpenAI has pioneered a technique to shape its models’ behaviors using something called reinforcement learning with human feedback (RLHF)... (MORE - details)
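The excerpt names RLHF but doesn’t explain it, so here is a minimal, illustrative sketch of the idea under toy assumptions: candidate responses are reduced to small feature vectors, a linear reward model is fit to made-up human preference pairs with a Bradley–Terry loss, and a softmax “policy” over the candidates is then nudged toward higher reward under a KL penalty that keeps it close to a frozen reference policy (a stand-in for the PPO step). Every feature, preference pair, and hyperparameter below is invented for illustration; this is not OpenAI’s actual pipeline.

[code]
# Toy RLHF sketch (not OpenAI's pipeline): fit a reward model on human
# preference pairs, then push a policy toward high reward with a KL penalty.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate responses, each summarized by a 4-dim feature vector.
candidates = rng.normal(size=(6, 4))

# Hypothetical human preference pairs: (index preferred, index rejected).
preferences = [(0, 3), (0, 5), (2, 3), (1, 4), (2, 5)]

# 1) Fit a linear reward model r(x) = w . x with the Bradley-Terry loss:
#    maximize log sigmoid(r(preferred) - r(rejected)).
w = np.zeros(4)
for _ in range(500):
    grad = np.zeros(4)
    for good, bad in preferences:
        diff = candidates[good] - candidates[bad]
        p = 1.0 / (1.0 + np.exp(-(w @ diff)))   # P(preferred wins)
        grad += (1.0 - p) * diff                # gradient of log-likelihood
    w += 0.1 * grad / len(preferences)

rewards = candidates @ w

# 2) Nudge a softmax policy toward high-reward responses, with a KL penalty
#    toward the frozen reference policy (stand-in for the PPO step in RLHF).
ref_log_probs = np.full(6, -np.log(6.0))        # uniform reference policy
logits = np.zeros(6)
beta = 0.2                                      # KL penalty strength
for _ in range(200):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Gradient of E[reward] - beta * KL(policy || reference) w.r.t. logits.
    kl_term = np.log(probs) - ref_log_probs + 1.0
    advantage = rewards - beta * kl_term
    grad = probs * (advantage - probs @ advantage)
    logits += 0.05 * grad

print("reward-model scores:", np.round(rewards, 2))
print("tuned policy:       ", np.round(np.exp(logits) / np.exp(logits).sum(), 2))
[/code]

In a real RLHF pipeline both the reward model and the policy are large neural networks and the update step is PPO rather than this bare policy gradient, but the overall structure (preference data, reward model, KL-constrained fine-tuning toward the reward) is the same idea the article is referring to.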

