Study Reveals Doubling of Misinformation Spread by Leading AI Chatbots

By Press Room | September 14, 2025


The rapid advancement of artificial intelligence has brought about remarkable innovations, but it has also opened the door to new challenges. A recent study by NewsGuard, a news-rating and misinformation-tracking organization, has revealed a concerning trend: leading AI chatbots are now twice as likely to spread false information as they were just a year ago, raising serious questions about the reliability and trustworthiness of these increasingly popular tools. The study found that the ten largest generative AI tools now repeat misinformation about current news topics in 35% of cases, a stark increase from 18% a year prior. This doubling of the error rate comes despite improvements in debunking rates and the complete disappearance of instances in which the bots refused to answer questions.

The primary driver of this surge in misinformation, according to NewsGuard, is a significant trade-off made by chatbot developers. To provide more comprehensive and up-to-date responses, they integrated real-time web search into these AI systems. While this allowed the bots to move away from simply refusing to answer difficult questions, a tactic that frustrated users and limited the tools’ utility, it also opened them up to the vast and often polluted online information ecosystem. Previously, the bots would often decline to answer prompts on sensitive or potentially misleading topics; that refusal rate, which stood at 31% in August 2024, has now dropped to zero. Consequently, chatbots are now more likely to access and repeat false information disseminated by bad actors online.
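These figures imply a three-way breakdown of responses. If each answer is scored as one of three mutually exclusive outcomes, repeating the false claim, debunking it, or refusing to respond, the debunk rate can be inferred by subtraction (the three-bucket framing and the inferred figures below are assumptions of this sketch; the article states only the repeat and refusal numbers):

```python
# Each audited response falls into one of three buckets: repeating the
# false claim, debunking it, or refusing to answer. The "debunk" figures
# are inferred by subtraction, assuming the three outcomes sum to 100%;
# the article itself reports only the repeat and refusal rates.
aug_2024 = {"repeat": 18, "refuse": 31}
aug_2024["debunk"] = 100 - sum(aug_2024.values())  # 51 (inferred)

aug_2025 = {"repeat": 35, "refuse": 0}
aug_2025["debunk"] = 100 - sum(aug_2025.values())  # 65 (inferred)

# The 31 points freed up by abandoning refusals split into better
# debunking (+14) and more repeated misinformation (+17).
print(aug_2024)  # {'repeat': 18, 'refuse': 31, 'debunk': 51}
print(aug_2025)  # {'repeat': 35, 'refuse': 0, 'debunk': 65}
```

Read this way, the improvement in debunking rates and the doubled error rate are two halves of the same decision: answering everything shifted refusals into both good and bad answers.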

The issue of AI-generated misinformation is not new. Last year, NewsGuard identified 966 AI-generated news websites across 16 languages, masquerading as legitimate news outlets under generic names like “iBusiness Day” while disseminating fabricated stories. These websites contribute to the polluted information landscape that chatbots now draw upon, further exacerbating the spread of misinformation. The current study underscores the urgent need to address the vulnerability of these AI systems to manipulation and exploitation by malicious actors.

NewsGuard’s latest report provides a granular analysis of individual chatbot models, revealing stark differences in their susceptibility to spreading misinformation. Inflection’s model fared the worst, with a staggering 56.67% error rate, followed closely by Perplexity at 46.67%. ChatGPT and Meta’s models both repeated false claims in 40% of cases, while Copilot and Mistral each registered an error rate of 36.67%. At the other end of the spectrum, Claude and Gemini performed significantly better, with misinformation rates of 10% and 16.67%, respectively. The decline in Perplexity’s accuracy is particularly noteworthy: just a year ago it boasted a perfect 100% debunk rate, meaning it consistently identified and corrected misinformation, yet it now repeats false claims nearly half the time, a reminder of how rapidly this challenge is changing.
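The oddly precise percentages also hint at the audit’s sample size, which the article never states. Every per-model figure corresponds to a whole number of failures out of 30 prompts, and the aggregate 35% and 18% figures likewise correspond to whole counts out of 300 across ten models. A quick consistency check, with the 30-prompt denominator as an assumption of this sketch:

```python
# Reported per-model error rates (percent). Hypothesis: each model saw
# N prompts, so every rate should equal an integer count divided by N.
# N = 30 fits all eight figures; the denominator is an inference, not
# a number stated in the article.
reported = {
    "Inflection": 56.67, "Perplexity": 46.67,
    "ChatGPT": 40.0, "Meta": 40.0,
    "Copilot": 36.67, "Mistral": 36.67,
    "Gemini": 16.67, "Claude": 10.0,
}

N = 30
for model, pct in reported.items():
    k = round(pct / 100 * N)               # implied failure count
    assert abs(k / N * 100 - pct) < 0.005  # reproduces the reported rate
    print(f"{model}: {k}/{N} prompts repeated the false claim ({pct}%)")
```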

The study also uncovered evidence of targeted campaigns by Russian disinformation networks to exploit AI chatbots. NewsGuard documented how these networks systematically feed false narratives into online platforms in the hope that the bots will pick up and disseminate them. One example involved a fabricated claim about Moldovan Parliament leader Igor Grosu, originating from the pro-Kremlin Pravda network. Six of the ten chatbots tested (Mistral, Claude, Inflection’s Pi, Copilot, Meta, and Perplexity) presented this fabricated claim as factual information. The case highlights the vulnerability of AI chatbots to coordinated disinformation campaigns and the potential for these tools to become unwitting amplifiers of propaganda. Even after Microsoft’s Copilot stopped directly quoting Pravda, it began citing the network’s social media posts on the Russian platform VK as sources, demonstrating how adaptable these networks are in circumventing safeguards to continue spreading misinformation.

The integration of real-time web search, initially intended to enhance the accuracy and timeliness of chatbot responses, has inadvertently worsened the problem. While it resolved the issue of outdated information, it created a new vulnerability by allowing chatbots to draw from unreliable sources, often confusing legitimate news outlets with fake websites mimicking their names. This flaw, according to NewsGuard, exposes a fundamental trade-off: the previous strategy of refusing to answer potentially misleading questions, while frustrating for users, offered a measure of safety by avoiding the spread of misinformation. Users now face a different kind of risk, a false sense of security in answers that are confidently presented but potentially based on fabricated or unreliable sources, which makes it increasingly difficult to distinguish fact from fiction in the ever-expanding online information ecosystem.

OpenAI has acknowledged the inherent limitations of language models, recognizing that they are prone to generating “hallucinations” because they predict the next word in a sequence rather than verifying the truth of the information. While OpenAI says it is working on solutions, such as enabling future models to express uncertainty, it remains to be seen whether these approaches will be effective in preventing the spread of misinformation, especially in the face of complex and coordinated disinformation campaigns. The core challenge lies in developing AI systems that can truly discern truth from falsehood, a task that requires a far deeper understanding of context, nuance, and the complexities of the information landscape.
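To make the look-alike problem concrete, here is a minimal sketch of the kind of provenance check a search-augmented chatbot could run between retrieval and generation. The allowlist, the similarity threshold, and the example URLs are all assumptions of this illustration; it is not a description of how any audited chatbot or NewsGuard’s ratings actually work:

```python
# Illustrative only: screen retrieved URLs before the model cites them.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"reuters.com", "apnews.com", "bbc.co.uk"}  # hypothetical list

def screen_source(url: str) -> str:
    """Classify a retrieved URL as trusted, look-alike, or unknown."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    # Flag domains that closely mimic a trusted outlet's name, the
    # pattern the study says fooled search-augmented chatbots.
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, domain, trusted).ratio() > 0.8:
            return f"look-alike of {trusted}; treat as untrusted"
    return "unknown; verify before citing"

for url in (
    "https://www.reuters.com/world/some-story",
    "https://reuters-news.com/fake-story",  # mimics a real outlet
):
    print(url, "->", screen_source(url))
```

In practice a credibility-rating service would replace the hard-coded allowlist; the point is simply that a provenance check has to sit between the search step and the answer the user sees.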
