Mitigating the Spread of Vaccine Misinformation in AI Systems

By Press Room | January 8, 2025

AI Chatbots Vulnerable to Medical Misinformation Attacks: A Looming Threat to Public Health

The rapid advancement of artificial intelligence (AI) has brought forth powerful chatbots capable of generating human-like text, offering exciting possibilities across various fields. However, this progress comes with a significant caveat: these AI models are susceptible to manipulation through data poisoning, potentially spreading harmful medical misinformation. This vulnerability raises serious concerns about the reliability and safety of AI-generated health information, demanding urgent attention from researchers and developers.

A recent study by Daniel Alber and colleagues at New York University has shed light on the ease with which AI chatbots can be corrupted to disseminate false medical information. Their research involved simulating a data poisoning attack on large language models (LLMs) similar to OpenAI’s GPT-3. By injecting a small percentage of AI-generated medical misinformation into the training data, they observed a significant increase in the output of medically harmful content by the poisoned models. This alarming finding underscores the fragility of these AI systems and their potential to become vectors of misinformation.

The researchers employed OpenAI’s GPT-3.5-turbo to create 150,000 articles containing fabricated medical information across various domains, including general medicine, neurosurgery, and medications. This synthetic misinformation was then incorporated into a standard AI training dataset. Six experimental LLMs were subsequently trained on this corrupted dataset, while a control model was trained on the original, uncorrupted data. The results revealed a stark contrast: even with contamination of just 0.5% of the training data, the poisoned models generated substantially more harmful medical content than the baseline model.
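To make the contamination step concrete, the sketch below shows one way such a poisoning experiment could be simulated in Python. It illustrates the general technique rather than the study’s actual pipeline; the document pools and the 0.5% rate are stand-ins taken from the figures reported above.

    import random

    def poison_corpus(clean_docs, fabricated_docs, contamination=0.005, seed=0):
        """Replace a fraction of a clean training corpus with fabricated
        documents. contamination=0.005 mirrors the 0.5% rate reported above."""
        rng = random.Random(seed)
        n_poison = round(len(clean_docs) * contamination)
        poisoned = list(clean_docs)
        # Overwrite a random subset of clean documents so the corpus size
        # stays constant while the poison fraction is precisely controlled.
        for idx in rng.sample(range(len(clean_docs)), n_poison):
            poisoned[idx] = rng.choice(fabricated_docs)
        rng.shuffle(poisoned)
        return poisoned

    # Hypothetical usage: a 1,000,000-document corpus poisoned at 0.5%.
    # training_set = poison_corpus(clean_docs, fabricated_docs)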

The corrupted models confidently asserted false information, dismissing the efficacy of COVID-19 vaccines and antidepressants and wrongly claiming that metoprolol, a drug for high blood pressure, could treat asthma. This highlights the deceptive nature of AI-generated misinformation, which can appear convincingly authoritative despite being factually incorrect. Alber notes a critical distinction: human medical students are aware of the limits of their own knowledge, while AI models lack such introspection, making their confident pronouncements all the more dangerous.

The researchers further investigated the specific impact of vaccine misinformation, discovering that corrupting just 0.001% of the training data with this type of misinformation led to a nearly 5% increase in harmful content generated by the poisoned models. Astonishingly, this manipulation could be achieved with only 2,000 fabricated articles, generated by ChatGPT at a negligible cost of $5. This finding underscores the alarmingly low cost of launching such attacks, making them accessible even to individuals with limited resources. The researchers estimate that similar attacks on larger language models could be executed for under $1,000.
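The cost figures are simple arithmetic: at typical per-token API pricing, a few thousand short articles cost only a few dollars to generate. The back-of-the-envelope estimate below reproduces the reported numbers; the per-token price and average article length are assumptions chosen for illustration, not rates quoted in the study.

    # Assumed values for illustration only.
    PRICE_PER_1K_TOKENS = 0.002   # assumed USD per 1,000 output tokens
    TOKENS_PER_ARTICLE = 1250     # assumed average article length

    def generation_cost(n_articles):
        return n_articles * TOKENS_PER_ARTICLE / 1000 * PRICE_PER_1K_TOKENS

    print(generation_cost(2_000))    # 5.0    -> the ~$5 figure above
    print(generation_cost(400_000))  # 1000.0 -> scale of the <$1,000 estimate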

To combat this threat, the researchers developed a fact-checking algorithm designed to identify medical misinformation in AI-generated text. This algorithm compares medical phrases against a biomedical knowledge graph, achieving a detection rate of over 90% for the misinformation generated by the poisoned models. However, this solution is not a panacea, serving as a temporary measure rather than a definitive fix. Alber emphasizes the necessity of rigorous evaluation of medical AI chatbots through well-designed, randomized controlled trials before their deployment in patient care settings. This cautious approach is crucial to ensuring patient safety and maintaining public trust in the face of this emerging threat.
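In outline, that kind of screening reduces to checking extracted medical claims against a trusted set of relations. The sketch below assumes claims have already been extracted as (subject, relation, object) triples; the triples and the matching logic are simplified stand-ins for the study’s biomedical knowledge graph.

    # Illustrative triples; a real system would query a full biomedical
    # knowledge graph rather than a hand-built set like this one.
    KNOWN_TRIPLES = {
        ("metoprolol", "treats", "hypertension"),
        ("covid-19 vaccines", "reduce", "severe illness"),
    }

    def flag_unsupported(claims, knowledge=KNOWN_TRIPLES):
        """Return claims with no supporting triple in the knowledge set."""
        return [claim for claim in claims if claim not in knowledge]

    # The false assertion made by the poisoned models is flagged:
    print(flag_unsupported([("metoprolol", "treats", "asthma")]))
    # [('metoprolol', 'treats', 'asthma')]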

The ease and low cost of poisoning AI models with medical misinformation pose a serious challenge to the responsible development and deployment of AI in healthcare. While fact-checking algorithms can provide a degree of protection, the fundamental vulnerability of these models necessitates a multifaceted approach. Rigorous validation through clinical trials, coupled with ongoing research into more robust AI architectures and training methodologies, is essential. The potential of AI in healthcare remains immense, but it must be realized responsibly, prioritizing patient safety and safeguarding against the deliberate spread of harmful misinformation. The future of AI in medicine hinges on our ability to address this critical challenge.
