Article: Hallucinations could blunt ChatGPT’s success

#1
C C
OpenAI says the problem’s solvable, Yann LeCun says we’ll see
https://spectrum.ieee.org/ai-hallucination#toggle-gdpr

INTRO: ChatGPT has wowed the world with the depth of its knowledge and the fluency of its responses, but one problem has hobbled its usefulness: It keeps hallucinating.

Yes, large language models (LLMs) hallucinate, a concept popularized by Google AI researchers in 2018. Hallucination in this context refers to mistakes in the generated text that are semantically or syntactically plausible but are in fact incorrect or nonsensical. In short, you can’t trust what the machine is telling you.

That’s why, while OpenAI’s Codex or GitHub’s Copilot can write code, an experienced programmer still needs to review the output—approving, correcting, or rejecting it before allowing it to slip into a code base where it might wreak havoc.

High school teachers are learning the same lesson. A ChatGPT-written book report or historical essay may be a breeze to read but could easily contain erroneous “facts” that the student was too lazy to root out.

Hallucinations are a serious problem. Bill Gates has mused that ChatGPT or similar large language models could someday provide medical advice to people without access to doctors. But you can’t trust advice from a machine prone to hallucinations.

Ilya Sutskever, OpenAI’s chief scientist and one of the creators of ChatGPT, says he’s confident that the problem will disappear with time as large language models learn to anchor their responses in reality. OpenAI has pioneered a technique to shape its models’ behaviors using something called reinforcement learning with human feedback (RLHF)... (MORE - details)
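For readers unfamiliar with RLHF, the core idea is to train a separate reward model on human preference comparisons between model responses, then fine-tune the language model to score well against it. What follows is a minimal, hypothetical sketch (Python with PyTorch) of the reward-modeling step only; the model, dimensions, and data are illustrative stand-ins, not OpenAI's actual pipeline.

# Illustrative sketch only: a toy pairwise reward model of the kind trained in
# the RLHF reward-modeling step. Names, dimensions, and data are hypothetical
# stand-ins, not OpenAI's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    """Maps a fixed-size response embedding to a scalar reward score."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(embed_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: the human-preferred response should
    # score higher than the rejected one.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyRewardModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Stand-in embeddings for response pairs that human labelers compared.
    chosen = torch.randn(16, 64)    # responses the labelers preferred
    rejected = torch.randn(16, 64)  # responses the labelers rejected

    for step in range(100):
        loss = preference_loss(model(chosen), model(rejected))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(f"final preference loss: {loss.item():.4f}")

In a full RLHF pipeline, a reward model along these lines would then serve as the objective for fine-tuning the language model itself (for example with a policy-gradient method such as PPO), which is the step intended to steer it away from confidently wrong answers.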

