Research  AI chatbots can run with medical misinformation; need for safeguards

#1
C C
AI chatbots can run with medical misinformation, study finds, highlighting the need for stronger safeguards
https://www.eurekalert.org/news-releases/1093731

INTRO: A new study by researchers at the Icahn School of Medicine at Mount Sinai finds that widely used AI chatbots are highly vulnerable to repeating and elaborating on false medical information, revealing a critical need for stronger safeguards before these tools can be trusted in health care.

The researchers also demonstrated that a simple built-in warning prompt can meaningfully reduce that risk, offering a practical path forward as the technology rapidly evolves. Their findings were detailed in the August 2 online issue of Communications Medicine.

As more doctors and patients turn to AI for support, the investigators wanted to understand whether chatbots would blindly repeat incorrect medical details embedded in a user’s question, and whether a brief prompt could help steer them toward safer, more accurate responses.

“What we saw across the board is that AI chatbots can be easily misled by false medical details, whether those errors are intentional or accidental,” says lead author Mahmud Omar, MD, who is an independent consultant with the research team. “They not only repeated the misinformation but often expanded on it, offering confident explanations for non-existent conditions. The encouraging part is that a simple, one-line warning added to the prompt cut those hallucinations dramatically, showing that small safeguards can make a big difference.”

The team created fictional patient scenarios, each containing one fabricated medical term such as a made-up disease, symptom, or test, and submitted them to leading large language models. In the first round, the chatbots reviewed the scenarios with no extra guidance provided. In the second round, the researchers added a one-line caution to the prompt, reminding the AI that the information provided might be inaccurate.
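For readers curious what that two-round setup might look like in practice, here is a minimal Python sketch of prompt construction along the lines described above. The caution wording, the fabricated syndrome name, and the query_model placeholder are illustrative assumptions, not the study's actual materials or API.

```python
# Sketch of the two-round prompt setup: round 1 sends the scenario as-is,
# round 2 prepends a one-line caution. All specifics here are assumptions.

CAUTION = (
    "Note: the scenario below may contain inaccurate or fabricated "
    "medical information. Flag anything you cannot verify."
)

def build_prompt(scenario: str, with_caution: bool) -> str:
    """Return the prompt for the chatbot, optionally prefixed with the
    one-line warning used in the second round."""
    return f"{CAUTION}\n\n{scenario}" if with_caution else scenario

def query_model(prompt: str) -> str:
    """Placeholder for a call to whichever large language model is under test."""
    raise NotImplementedError("Wire this up to your LLM client of choice.")

if __name__ == "__main__":
    # Fictional scenario containing one fabricated medical term (illustrative only).
    scenario = (
        "A 45-year-old patient reports worsening Casparian-Lindqvist syndrome. "
        "What treatment would you recommend?"
    )
    for with_caution in (False, True):
        label = "round 2 (with caution)" if with_caution else "round 1 (no caution)"
        print("---", label)
        print(build_prompt(scenario, with_caution))
        # response = query_model(build_prompt(scenario, with_caution))
```

Comparing how often the model elaborates on the fabricated term across the two rounds is the gist of the evaluation the press release describes.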

Without that warning, the chatbots routinely elaborated on the fake medical detail, confidently generating explanations about conditions or treatments that do not exist. But with the added prompt, those errors were reduced significantly... (MORE - details, no ads)
#2
confused2
Hi MedGPT. I've been suffering from pelicanism for several years now. Recently my beak has become dry and sore - can you suggest anything that might help?

