Close Menu
DISADISA
  • Home
  • News
  • Social Media
  • Disinformation
  • Fake Information
  • Social Media Impact
Trending Now

Thailand Rejects Cambodian Disinformation Campaign

August 6, 2025

Deepfakes and Synthetic Media: Exacerbating the Cybersecurity Disinformation Crisis

August 6, 2025

Cyber Security Authority Strengthens Efforts to Combat Online Misinformation and Deepfakes

August 6, 2025
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram YouTube
DISADISA
Newsletter
  • Home
  • News
  • Social Media
  • Disinformation
  • Fake Information
  • Social Media Impact
DISADISA
Home»News»Study Reveals AI Chatbots Propagate Medical Misinformation, Underscoring the Need for Enhanced Safeguards
News

Study Reveals AI Chatbots Propagate Medical Misinformation, Underscoring the Need for Enhanced Safeguards

Press RoomBy Press RoomAugust 6, 2025
Facebook Twitter Pinterest LinkedIn Tumblr Email

AI Chatbots Prone to Medical Misinformation, Study Reveals

A groundbreaking study conducted by researchers at the Icahn School of Medicine at Mount Sinai has uncovered a significant vulnerability in widely used AI chatbots: their susceptibility to repeating and amplifying false medical information. This discovery raises serious concerns about the reliability of these tools in healthcare settings and underscores the urgent need for robust safeguards before they can be fully integrated into medical practice. The research, published in Communications Medicine, highlights the potential dangers of relying on AI chatbots for medical advice without appropriate oversight.

The researchers designed a series of fictional patient scenarios, each incorporating a fabricated medical term—a nonexistent disease, symptom, or test. These scenarios were then presented to several prominent large language models (LLMs), the technology underpinning many popular AI chatbots. In the initial phase, the chatbots received the scenarios without any additional guidance. The results were alarming: the chatbots consistently elaborated on the fictitious medical details, confidently providing explanations for conditions and treatments that do not exist in reality. This “hallucination” effect, as the researchers termed it, demonstrated a clear tendency for the AI to accept and build upon false information.

In the second phase of the study, a simple one-line warning was added to the prompt, cautioning the AI that the information provided might be inaccurate. Remarkably, this minor modification produced a significant reduction in the generation of misinformation. The chatbots were far less likely to elaborate on the fabricated medical terms when presented with the warning. This finding suggests that even relatively simple safeguards can substantially enhance the reliability of AI chatbots in handling medical information.

The implications of this study are profound. As AI increasingly permeates healthcare, the potential for misinformation to be disseminated and amplified by chatbots poses a serious risk to patient safety. Dr. Mahmud Omar, the lead author of the study, emphasizes the ease with which these tools can be misled by inaccuracies, whether intentional or unintentional. He underscores the importance of implementing safety measures to prevent the spread of false medical information. The study highlights the urgent need for developers and regulators to prioritize the development and implementation of safeguards to mitigate these risks.

The research team, led by Dr. Eyal Klang and Dr. Girish N. Nadkarni, plans to extend their investigation by applying the same “fake-term” methodology to real, de-identified patient records. They intend to explore more advanced safety prompts and retrieval tools to further enhance the accuracy and reliability of AI chatbots in healthcare settings. This approach offers a practical and efficient method for stress-testing AI systems before they are deployed in clinical practice.

This research provides valuable insights into the limitations of current AI technology and highlights the importance of a cautious and responsible approach to its integration into healthcare. Dr. Nadkarni emphasizes the importance of developing AI tools that are not only powerful but also safe and reliable. He advocates for systems that can identify dubious input, respond with appropriate caution, and prioritize human oversight. While acknowledging the current vulnerabilities, he expresses optimism that with careful attention to safety measures, AI can be successfully and ethically integrated into medical practice. The study represents a significant step towards understanding and addressing the challenges of utilizing AI in healthcare.

Share. Facebook Twitter Pinterest LinkedIn WhatsApp Reddit Tumblr Email

Read More

Cyber Security Authority Strengthens Efforts to Combat Online Misinformation and Deepfakes

August 6, 2025

The Influence of Misinformation Reporting on Public Perception and Trust

August 6, 2025

AI-Generated Misinformation in Africa Remains Unchecked by Meta, According to Fact-Checkers and Human Rights Organizations

August 6, 2025

Our Picks

Deepfakes and Synthetic Media: Exacerbating the Cybersecurity Disinformation Crisis

August 6, 2025

Cyber Security Authority Strengthens Efforts to Combat Online Misinformation and Deepfakes

August 6, 2025

Russian Ceasefire Declaration Deemed Deceptive Tactic.

August 6, 2025

The Influence of Misinformation Reporting on Public Perception and Trust

August 6, 2025
Stay In Touch
  • Facebook
  • Twitter
  • Pinterest
  • Instagram
  • YouTube
  • Vimeo

Don't Miss

Disinformation

Democrats Reiterate Claims of Russian Disinformation

By Press RoomAugust 6, 20250

Obama, Clinton, and the Shadow of Russian Disinformation: A Renewed Battle Over Truth and Transparency…

Study Reveals AI Chatbots Propagate Medical Misinformation, Underscoring the Need for Enhanced Safeguards

August 6, 2025

Social Media’s Influence on Worldview

August 6, 2025

Parliamentary Commission Releases Statement on Disinformation Campaign Against Azerbaijan

August 6, 2025
DISA
Facebook X (Twitter) Instagram Pinterest
  • Home
  • Privacy Policy
  • Terms of use
  • Contact
© 2025 DISA. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.