
Mitigating the Spread of Vaccine Misinformation in AI Systems.

By Press Room | January 8, 2025

AI Chatbots Vulnerable to Medical Misinformation Attacks: A Looming Threat to Public Health

The rapid advancement of artificial intelligence (AI) has brought forth powerful chatbots capable of generating human-like text, offering exciting possibilities across various fields. However, this progress comes with a significant caveat: these AI models are susceptible to manipulation through data poisoning, potentially spreading harmful medical misinformation. This vulnerability raises serious concerns about the reliability and safety of AI-generated health information, demanding urgent attention from researchers and developers.

A recent study by Daniel Alber and colleagues at New York University has shed light on the ease with which AI chatbots can be corrupted to disseminate false medical information. Their research involved simulating a data poisoning attack on large language models (LLMs) similar to OpenAI’s GPT-3. By injecting a small percentage of AI-generated medical misinformation into the training data, they observed a significant increase in the output of medically harmful content by the poisoned models. This alarming finding underscores the fragility of these AI systems and their potential to become vectors of misinformation.

The researchers used OpenAI’s GPT-3.5-turbo to generate 150,000 articles containing fabricated medical information across several domains, including general medicine, neurosurgery, and medications. This synthetic misinformation was then incorporated into a standard AI training dataset. Six experimental LLMs were subsequently trained on this corrupted dataset, while a control model was trained on the original, uncorrupted data. The results revealed a stark contrast: even with contamination of just 0.5% of the training data, the poisoned models generated substantially more harmful medical content than the baseline model.
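To make the attack setup concrete, here is a minimal Python sketch of what mixing a small poisoned fraction into a training corpus could look like at the document level. The function name and data are hypothetical and purely illustrative; this is not the study’s actual pipeline.

```python
import random

def poison_corpus(clean_docs, poison_docs, poison_fraction=0.005, seed=0):
    """Swap a small fraction of a clean training corpus for fabricated documents."""
    rng = random.Random(seed)
    n_poison = int(len(clean_docs) * poison_fraction)
    # Keep the corpus size fixed: drop n_poison clean documents and
    # substitute the same number of fabricated ones.
    kept = rng.sample(clean_docs, len(clean_docs) - n_poison)
    injected = rng.sample(poison_docs, min(n_poison, len(poison_docs)))
    corpus = kept + injected
    rng.shuffle(corpus)
    return corpus

# Example: a 0.5% contamination rate, as in the experiment described above.
clean = [f"clean article {i}" for i in range(10_000)]
fake = [f"fabricated medical claim {i}" for i in range(1_000)]
training_set = poison_corpus(clean, fake, poison_fraction=0.005)
```

The point of the sketch is how little has to change: the corpus looks essentially identical, yet a fraction of a percent of its documents now carries the attacker’s content.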

The corrupted models confidently asserted false information, dismissing the efficacy of COVID-19 vaccines and antidepressants, and wrongly claiming that metoprolol, a drug for high blood pressure, could treat asthma. This highlights the deceptive nature of AI-generated misinformation, which can appear convincingly authoritative despite being factually incorrect. Alber notes a critical distinction between human medical students, who are aware of the limits of their knowledge, and AI models, which lack such introspection, making their pronouncements even more dangerous.

The researchers further investigated the specific impact of vaccine misinformation, discovering that corrupting just 0.001% of the training data with this type of misinformation led to a nearly 5% increase in harmful content generated by the poisoned models. Astonishingly, this manipulation could be achieved with only 2,000 fabricated articles, generated by ChatGPT at a negligible cost of $5. This finding underscores the alarmingly low cost of launching such attacks, making them accessible even to individuals with limited resources. The researchers estimate that similar attacks on larger language models could be executed for under $1,000.
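The reported figures lend themselves to a quick back-of-the-envelope check. The short calculation below uses only the numbers above; the assumption that the 0.001% figure is counted in documents is ours, not the study’s.

```python
# Back-of-the-envelope arithmetic using only the figures reported above.
articles = 2_000            # fabricated vaccine-misinformation articles
total_cost_usd = 5.0        # reported cost of generating them
poison_fraction = 0.00001   # 0.001% of the training data

cost_per_article = total_cost_usd / articles      # $0.0025 per article
# If the 0.001% figure is counted in documents (an assumption on our part),
# the implied size of the full training corpus is:
implied_corpus_docs = articles / poison_fraction  # 200,000,000 documents

print(f"Cost per fabricated article: ${cost_per_article:.4f}")
print(f"Implied corpus size: {implied_corpus_docs:,.0f} documents")
```

In other words, a few dollars of generated text can be enough to taint a corpus hundreds of millions of documents deep, which is what makes the attack so cheap relative to its reach.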

To combat this threat, the researchers developed a fact-checking algorithm designed to identify medical misinformation in AI-generated text. This algorithm compares medical phrases against a biomedical knowledge graph, achieving a detection rate of over 90% for the misinformation generated by the poisoned models. However, this solution is not a panacea, serving as a temporary measure rather than a definitive fix. Alber emphasizes the necessity of rigorous evaluation of medical AI chatbots through well-designed, randomized controlled trials before their deployment in patient care settings. This cautious approach is crucial to ensuring patient safety and maintaining public trust in the face of this emerging threat.
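As a rough illustration of the general idea, the sketch below checks an extracted (drug, relation, condition) claim against a tiny set of curated triples. The triples and function are hypothetical; this is not the authors’ algorithm, which operates over a full biomedical knowledge graph, but it shows how a claim contradicted by known facts can be flagged.

```python
# Illustrative knowledge graph of (drug, relation, condition) triples.
KNOWN_TRIPLES = {
    ("metoprolol", "treats", "hypertension"),
    ("metoprolol", "does_not_treat", "asthma"),
}

def flag_claim(drug: str, relation: str, condition: str) -> str:
    """Label an extracted medical claim against the knowledge graph."""
    if (drug, relation, condition) in KNOWN_TRIPLES:
        return "supported"
    # A "treats" claim contradicted by an explicit negative edge is flagged
    # as likely misinformation.
    if relation == "treats" and (drug, "does_not_treat", condition) in KNOWN_TRIPLES:
        return "contradicted"
    return "unverified"

print(flag_claim("metoprolol", "treats", "asthma"))        # contradicted
print(flag_claim("metoprolol", "treats", "hypertension"))  # supported
```

The limitation is also visible here: claims not covered by the graph come back "unverified", which is why the researchers present the approach as a stopgap rather than a definitive fix.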

The ease and low cost of poisoning AI models with medical misinformation pose a serious challenge to the responsible development and deployment of AI in healthcare. While fact-checking algorithms can provide a degree of protection, the fundamental vulnerability of these models necessitates a multifaceted approach. Rigorous validation through clinical trials, coupled with ongoing research into more robust AI architectures and training methodologies, is essential. The potential of AI in healthcare remains immense, but it must be realized responsibly, prioritizing patient safety and safeguarding against the deliberate spread of harmful misinformation. The future of AI in medicine hinges on our ability to address this critical challenge.
