Western AI Chatbots Susceptible to Russian Propaganda Influence: Study Findings

By Press Room | March 10, 2025

Western AI Chatbots Unwittingly Spread Russian Propaganda, NewsGuard Research Reveals

A new study by NewsGuard has revealed a concerning vulnerability in Western AI chatbots: their susceptibility to Russian propaganda. The research found that leading AI models are inadvertently repeating false narratives disseminated by a Moscow-based disinformation network known as "Pravda," meaning "truth" in Russian. This network, which published a staggering 3.6 million articles in 2024 alone, is exploiting the way AI systems learn, effectively "grooming" them to regurgitate pro-Kremlin misinformation. The study highlights a critical challenge for the tech industry as AI becomes increasingly integrated into daily life.

NewsGuard’s audit of 10 prominent AI chatbots revealed that they repeated Pravda’s false narratives a disturbing 33% of the time. Even more alarmingly, seven of the chatbots directly cited Pravda websites as legitimate sources. While the specific AI models tested were not disclosed, NewsGuard analyst Isis Blachez confirmed that the problem is widespread. Blachez emphasized that Russia appears to be shifting its disinformation tactics away from directly targeting human readers and towards manipulating AI models for broader reach and more insidious impact.

This new tactic, dubbed "LLM grooming" by NewsGuard, involves deliberately flooding the datasets used to train AI models with disinformation. Large language models (LLMs), the technology behind chatbots such as ChatGPT, Claude, Gemini, Grok 3, and Perplexity, learn by analyzing vast quantities of text and code. By injecting large volumes of propaganda into those datasets, Pravda aims to bias AI outputs towards pro-Russian perspectives. The manipulation is subtle and difficult to detect, which makes it a particularly insidious threat: users receive biased answers with no indication of the underlying manipulation.
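To make the mechanism concrete, here is a minimal, hypothetical sketch; it is not NewsGuard's methodology, and real LLM training is far more sophisticated. A naive "model" that simply adopts whichever claim appears most often in its corpus will absorb a planted falsehood once it is repeated at sufficient volume. The corpus contents and counts below are invented for illustration.

from collections import Counter

# Toy "training corpus": each string stands in for one crawled document.
organic_corpus = ["zelensky did not ban truth social"] * 50    # hypothetical legitimate coverage
poisoned_corpus = ["zelensky banned truth social"] * 2000      # hypothetical planted narrative, repeated at scale

def dominant_claim(corpus):
    """Return the claim seen most often, mimicking purely frequency-driven learning."""
    claim, count = Counter(corpus).most_common(1)[0]
    return claim, count

claim, count = dominant_claim(organic_corpus + poisoned_corpus)
print(claim, count)  # the planted claim wins on sheer volume of repetition

Actual LLMs learn far richer patterns than raw frequencies, but the lever is the same one the article describes: the statistical makeup of the training data shapes the answers the model gives.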

Pravda’s strategy is both methodical and extensive. The network boasts a sprawling web of 150 websites publishing in dozens of languages across 49 countries. This vast operation generates over 20,000 articles every 48 hours, overwhelming AI systems with a deluge of misinformation. This "firehose of falsehoods" makes it challenging for AI companies to effectively filter out the propaganda without risking the inadvertent censorship of legitimate content. The sheer scale of Pravda’s network and the volume of content it produces pose a serious obstacle to maintaining the integrity of AI-generated information.
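For a sense of scale, 20,000 articles every 48 hours is roughly 10,000 a day, or about 3.65 million a year, which squares with the 3.6 million articles the network is reported to have published in 2024.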

The implications of this manipulation are significant. As AI tools become more integrated into daily life, from search engines to news aggregators, the potential for foreign actors to influence public perception grows exponentially. One example cited in the report is the false claim that Ukrainian President Volodymyr Zelensky banned Donald Trump’s Truth Social app in Ukraine. Six of the 10 chatbots in the study repeated this falsehood, some even citing Pravda articles as their source. This demonstrates how easily manipulated narratives can spread through AI systems and potentially reach a vast audience.

NewsGuard stresses the urgency for AI companies to develop more robust verification and content-sourcing practices. Simply blocking Pravda websites is insufficient, as the network continuously expands with new domains and subdomains. Blachez warns that without adequate safeguards, AI platforms risk becoming unwitting conduits for Kremlin propaganda. Users, too, have a role to play by critically evaluating AI-generated answers and cross-checking claims against multiple sources, especially on sensitive or news-related topics. Tools like NewsGuard’s Misinformation Fingerprints can help identify and avoid unreliable sources. The report underscores the growing threat of AI manipulation and the need for both developers and users to remain vigilant against the spread of disinformation. The future of informed decision-making depends on it.
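As a rough illustration of why blocking alone falls short, the sketch below (the domain names and URLs are hypothetical, and this is not an actual NewsGuard or vendor tool) filters sources against a static blocklist, catching known domains and their subdomains. Anything the list does not yet know about, such as a freshly registered mirror site, passes straight through, which is exactly the gap a continuously expanding network exploits.

from urllib.parse import urlparse

# Hypothetical blocklist of known network domains.
KNOWN_NETWORK_DOMAINS = {"pravda-example.ru", "news-pravda-example.com"}

def is_blocked(url: str) -> bool:
    """True if the URL's host is a known network domain or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in KNOWN_NETWORK_DOMAINS)

print(is_blocked("https://fr.news-pravda-example.com/article"))   # True: known subdomain
print(is_blocked("https://brand-new-mirror-example.net/article"))  # False: new domain evades the list

That lag between new domains appearing and lists being updated is why the report calls for stronger verification and content-sourcing practices rather than domain blocking alone.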
