Study Reveals Doubling of Misinformation Spread by Leading AI Chatbots

By Press Room | September 14, 2025

AI Chatbots Now Twice as Likely to Spread Disinformation, Study Finds

The rapid advancement of artificial intelligence has brought remarkable innovations, but it has also opened the door to new challenges. A recent study by NewsGuard, a news-rating and misinformation-tracking organization, reveals a troubling trend: leading AI chatbots are now twice as likely to spread false information as they were just a year ago. The study found that the ten largest generative AI tools now repeat misinformation about current news topics in 35% of cases, a stark increase from 18% a year prior. This doubling of the error rate comes despite improved debunking rates and the complete disappearance of cases in which the bots refuse to answer, and it raises serious questions about the reliability and trustworthiness of these increasingly popular tools.

According to NewsGuard, the primary driver of this surge is a trade-off made by chatbot developers. To provide more comprehensive and up-to-date responses, they integrated real-time web search into these AI systems. This allowed the bots to stop simply refusing to answer difficult questions, a tactic that frustrated users and limited the tools' utility, but it also exposed them to a vast and often polluted online information ecosystem. Previously, the bots would often decline to answer prompts on sensitive or potentially misleading topics; that refusal rate, which stood at 31% in August 2024, has now dropped to zero. Consequently, chatbots are more likely to retrieve and repeat false information planted online by bad actors.

The issue of AI-generated misinformation is not new. Last year, NewsGuard identified 966 AI-generated news websites across 16 languages that masquerade as legitimate outlets under generic names like “iBusiness Day” while publishing fabricated stories. These sites feed the polluted information landscape that chatbots now draw upon, further amplifying the spread of misinformation. The current study underscores the urgent need to address these AI systems' vulnerability to manipulation and exploitation by malicious actors.

NewsGuard’s latest report provides a granular analysis of individual chatbot models, revealing stark differences in their susceptibility to spreading misinformation. Inflection’s model fared worst, with a 56.67% error rate, followed by Perplexity at 46.67%. ChatGPT and Meta’s model each repeated false claims in 40% of cases, while Copilot and Mistral registered 36.67%. At the other end of the spectrum, Claude and Gemini performed significantly better, with misinformation rates of 10% and 16.67%, respectively. Perplexity’s decline is particularly noteworthy: just a year ago it boasted a perfect 100% debunk rate, consistently identifying and correcting misinformation. Its current tendency to repeat false claims nearly half the time highlights how rapidly this challenge is evolving.
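A side note on the arithmetic: every error rate in the report is an exact multiple of 1/30, which is consistent with a test set of about 30 prompts per model. The prompt count is not stated in this article, so treat it as an assumption. A minimal Python sketch under that assumption reconstructs the implied failure counts:

    # A minimal sketch, assuming a 30-prompt test per model. This is an
    # assumption: the reported percentages are all exact multiples of 1/30,
    # but the article itself does not state the prompt count.
    reported = {          # error rates as published, in percent
        "Inflection": 56.67,
        "Perplexity": 46.67,
        "ChatGPT": 40.00,
        "Meta": 40.00,
        "Copilot": 36.67,
        "Mistral": 36.67,
        "Gemini": 16.67,
        "Claude": 10.00,
    }

    PROMPTS = 30          # assumed number of test prompts per model

    for model, pct in reported.items():
        failures = round(pct / 100 * PROMPTS)   # implied failed responses
        print(f"{model:>10}: {failures}/{PROMPTS} prompts "
              f"= {failures / PROMPTS:.2%} (reported {pct}%)")

Under that assumption, Inflection’s 56.67% corresponds to 17 of 30 prompts answered with a false claim, and Claude’s 10% to just 3.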

The study also uncovered evidence of targeted campaigns by Russian disinformation networks to exploit AI chatbots. NewsGuard documented how these networks systematically seed false narratives across online platforms in the hope that the bots will pick them up and repeat them. One example involved a fabricated claim about Moldovan Parliament leader Igor Grosu, originating from the pro-Kremlin Pravda network. Six of the ten chatbots tested (Mistral, Claude, Inflection’s Pi, Copilot, Meta, and Perplexity) presented this fabricated claim as fact. The case highlights the vulnerability of AI chatbots to coordinated disinformation campaigns and their potential to become unwitting amplifiers of propaganda. Even after Microsoft’s Copilot stopped quoting Pravda directly, it began citing the network’s posts on the Russian social platform VK as sources, demonstrating how readily these networks adapt to circumvent safeguards and keep spreading misinformation.

The integration of real-time web search, initially intended to improve the accuracy and timeliness of chatbot responses, has inadvertently worsened the problem. While it resolved the issue of outdated information, it created a new vulnerability by letting chatbots draw on unreliable sources, often confusing legitimate news outlets with fake websites that mimic their names. This flaw, according to NewsGuard, exposes a fundamental tension: the old strategy of refusing to answer potentially misleading questions frustrated users, but it offered a measure of safety by avoiding the spread of misinformation. Users now face a different risk: a false sense of security in answers that are confidently presented yet may rest on fabricated or unreliable sources. That makes it increasingly difficult to separate fact from fiction in an ever-expanding online information ecosystem.

OpenAI has acknowledged the inherent limitations of language models, which are prone to generating “hallucinations” because they predict the next word in a sequence rather than verify the truth of what they produce. The company says it is working on remedies, such as enabling future models to express uncertainty, but it remains to be seen whether these approaches can prevent the spread of misinformation, especially against complex, coordinated disinformation campaigns. The core challenge is building AI systems that can genuinely distinguish truth from falsehood, a task that demands a far deeper grasp of context, nuance, and the complexities of the information landscape.
