
Article  Hallucinations could blunt ChatGPT’s success

C C
OpenAI says the problem’s solvable, Yann LeCun says we’ll see
https://spectrum.ieee.org/ai-hallucination#toggle-gdpr

INTRO: ChatGPT has wowed the world with the depth of its knowledge and the fluency of its responses, but one problem has hobbled its usefulness: It keeps hallucinating.

Yes, large language models (LLMs) hallucinate, a concept popularized by Google AI researchers in 2018. Hallucination in this context refers to mistakes in the generated text that are semantically or syntactically plausible but are in fact incorrect or nonsensical. In short, you can’t trust what the machine is telling you.

That’s why, while OpenAI’s Codex or GitHub’s Copilot can write code, an experienced programmer still needs to review the output—approving, correcting, or rejecting it before allowing it to slip into a code base where it might wreak havoc.

High school teachers are learning the same. A ChatGPT-written book report or historical essay may be a breeze to read but could easily contain erroneous “facts” that the student was too lazy to root out.

Hallucinations are a serious problem. Bill Gates has mused that ChatGPT or similar large language models could some day provide medical advice to people without access to doctors. But you can’t trust advice from a machine prone to hallucinations.

Ilya Sutskever, OpenAI’s chief scientist and one of the creators of ChatGPT, says he’s confident that the problem will disappear with time as large language models learn to anchor their responses in reality. OpenAI has pioneered a technique to shape its models’ behaviors using something called reinforcement learning from human feedback (RLHF)... (MORE - details)
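The core of the RLHF technique the article mentions can be sketched in miniature: humans rank pairs of model answers, and a reward model is trained so the preferred answer scores higher, typically via a pairwise (Bradley–Terry) loss. The scores and function names below are purely illustrative assumptions, not OpenAI's actual implementation, which uses learned neural reward models and a full reinforcement-learning loop.

```python
import math

def pairwise_loss(preferred_score: float, rejected_score: float) -> float:
    """Negative log-probability (Bradley-Terry) that the human-preferred
    answer beats the rejected one under the reward model's scores."""
    return -math.log(1.0 / (1.0 + math.exp(rejected_score - preferred_score)))

# Hypothetical reward-model scores for two candidate answers:
grounded_score = 2.0       # answer anchored in verifiable facts
hallucinated_score = 1.5   # fluent but fabricated answer

loss = pairwise_loss(grounded_score, hallucinated_score)
# Training adjusts the reward model to reduce this loss, widening the
# score gap; the LLM is then fine-tuned to maximize the learned reward.
```

Note that the loss shrinks as the preferred answer's margin grows, which is what pushes the model's behavior toward grounded responses over confident fabrications.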




