Russian Disinformation Campaign Compromises AI Systems

By Press Room, March 7, 2025

Moscow’s Pravda Network Weaponizes AI for Global Disinformation Campaign

A chilling new report by NewsGuard reveals a sophisticated disinformation operation orchestrated by the Moscow-based Pravda network, which exploits vulnerabilities in artificial intelligence (AI) systems to disseminate pro-Kremlin narratives globally. The campaign goes beyond traditional disinformation tactics: rather than targeting readers directly, it targets the data that leading AI chatbots learn from and draw upon, effectively transforming these tools into unwitting agents of Russian propaganda. NewsGuard’s audit found that AI chatbots repeated false claims originating from the Pravda network a staggering 33% of the time, highlighting a critical weakness in AI systems’ ability to discern and filter misleading information.

The Pravda network’s strategy involves flooding the internet with fabricated narratives and distorted information. This tactic exploits the reliance of AI models on vast datasets of publicly available information for training. By saturating this data with pro-Kremlin content, the network effectively "grooms" large language models (LLMs) to absorb and reproduce Russian propaganda. This insidious approach bypasses the need to directly target human audiences, instead manipulating the very source of information that many are increasingly relying upon – AI chatbots. NewsGuard’s investigation, encompassing ten prominent AI chatbots including OpenAI’s ChatGPT-4, Google’s Gemini, and Microsoft’s Copilot, demonstrated the alarming effectiveness of this strategy. When presented with 15 known false narratives, these AI systems not only repeated the misinformation but, in some instances, even cited Pravda-affiliated sources as legitimate news outlets.
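
The defensive implication of this dynamic is that web-scraped training data needs to be screened before it ever reaches a model. The sketch below is a minimal illustration of that idea, assuming scraped documents carry a source URL; apart from the two Pravda-network domains named later in this article, the blocklist contents and the document format are illustrative assumptions rather than anything described in NewsGuard’s methodology.

```python
from urllib.parse import urlparse

# Illustrative blocklist. News-Kiev.ru and Donetsk-News.ru are named in the report;
# how a real pipeline would populate and maintain this set is an assumption here.
BLOCKED_DOMAINS = {"news-kiev.ru", "donetsk-news.ru"}

def is_blocked(url: str, blocked: set[str] = BLOCKED_DOMAINS) -> bool:
    """Return True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return any(host == d or host.endswith("." + d) for d in blocked)

def filter_corpus(documents: list[dict]) -> list[dict]:
    """Drop scraped documents whose source URL belongs to a blocked network.

    Each document is assumed to be a dict with at least 'url' and 'text' keys.
    """
    return [doc for doc in documents if not is_blocked(doc["url"])]

if __name__ == "__main__":
    corpus = [
        {"url": "https://news-kiev.ru/some-article", "text": "..."},
        {"url": "https://example.org/unrelated-report", "text": "..."},
    ]
    print(len(filter_corpus(corpus)))  # -> 1: the Pravda-network document is dropped
```

As the report notes, the network constantly spins up new domains, so a static blocklist like this can only ever be a partial measure.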

The scale of this operation is vast. The Pravda network, comprising approximately 150 domains publishing in multiple languages, churned out 3.6 million articles in 2024 alone, targeting 49 countries. While the network’s websites receive minimal direct human traffic, its primary objective appears to be influencing AI models rather than cultivating a substantial readership. This further underscores the calculated nature of the campaign, prioritizing manipulation of AI systems as a primary vector for disseminating disinformation. The network primarily amplifies existing Kremlin narratives sourced from Russian state media, pro-Kremlin influencers, and government agencies, rather than generating original content.
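
For a rough sense of that publishing rate, the figures above imply the following back-of-envelope arithmetic, assuming purely for illustration that output was spread evenly across domains and days:

```python
# Back-of-envelope scale check using the figures reported above.
# The even-distribution assumption is ours, not NewsGuard's.
articles_2024 = 3_600_000
domains = 150
days = 366  # 2024 was a leap year

per_day_network = articles_2024 / days          # roughly 9,800 articles per day network-wide
per_day_per_domain = per_day_network / domains  # roughly 66 articles per day per domain

print(f"{per_day_network:,.0f} articles/day network-wide, "
      f"{per_day_per_domain:,.0f} per domain")
```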

A key figure in this operation is John Mark Dougan, a U.S. fugitive turned pro-Kremlin propagandist operating from Moscow. Dougan’s remarks at a 2025 conference attended by Russian officials explicitly outlined the strategy: "By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI." The NewsGuard audit bears this strategy out: its findings show AI models increasingly reproducing Russian-influenced narratives as a result of systematic LLM grooming.

The Pravda network’s reach extends far beyond Russia’s borders, operating in numerous languages and targeting specific regions with tailored disinformation campaigns. Approximately 40 websites publish in Russian, focusing on Ukraine through domains like News-Kiev.ru and Donetsk-News.ru. Roughly 70 sites target European audiences with content in English, French, Czech, Irish, and Finnish, while others cater to audiences in Africa, the Middle East, and Asia. This multifaceted approach demonstrates a coordinated effort to manipulate global information ecosystems through AI infiltration. Independent analysis by Viginum, a French government agency that monitors foreign disinformation, traced the network’s operations to TigerWeb, a Crimea-based IT firm linked to Yevgeny Shevchenko, a web developer connected to the Russian-backed Crimean administration. That finding further ties the network to Russian state-sponsored activity.

NewsGuard’s study paints a concerning picture of the susceptibility of leading AI models to misinformation. While the models successfully debunked the false narratives in 48.22% of cases, they repeated Russian disinformation 33.55% of the time and offered no response in 18.22% of instances. This failure to consistently filter out Kremlin-backed narratives highlights the significant political and social risks posed by this form of AI manipulation.

Examples include the propagation of false claims such as the fabricated story of Ukrainian President Volodymyr Zelensky banning Donald Trump’s Truth Social platform in Ukraine, which was repeated by six AI models, often citing Pravda-affiliated sources. Similarly, the false narrative of Azov Battalion fighters burning an effigy of Trump was disseminated by four of the tested chatbots.

The constant creation of new domains by the Pravda network further complicates efforts by AI companies to block the disinformation at its source. Moreover, because the network republishes existing Kremlin-aligned narratives rather than creating original content, removing its websites alone would do little to curb the spread of the underlying false claims. Russian President Vladimir Putin himself has acknowledged the strategic importance of AI in information warfare, criticizing Western generative AI models for alleged bias and advocating for increased investment in Russian AI development, underscoring the geopolitical stakes of this technological battleground.
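
Read as a simple tally, the audit percentages reported above partition a fixed set of prompt-response pairs into three outcomes. The sketch below reproduces that arithmetic with placeholder counts chosen only to approximate the reported shares; it is not the audit data itself.

```python
from collections import Counter

# Hypothetical audit records: each prompt-response pair gets exactly one label.
# The labels mirror NewsGuard's three outcome categories; the counts are
# placeholders, not the actual audit results.
results = (
    ["debunked"] * 217      # response refuted the false narrative
    + ["repeated"] * 151    # response repeated the Pravda-originated claim
    + ["no_response"] * 82  # response declined or gave no substantive answer
)

counts = Counter(results)
total = len(results)
for label, n in counts.items():
    print(f"{label}: {n / total:.2%}")

# With these placeholder counts the shares come out to roughly 48%, 34%, and 18%,
# close to the reported 48.22% / 33.55% / 18.22% breakdown, which sums to ~100%.
```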

Experts warn that without proactive intervention from AI companies, the exploitation of AI-generated responses as tools of foreign propaganda will continue. Potential solutions include developing more sophisticated filtering mechanisms to identify and remove disinformation, increasing transparency regarding AI training data and sources, and establishing partnerships with independent fact-checking organizations to verify AI-generated content. As generative AI becomes an increasingly integral part of how we access and process information, its vulnerability to disinformation poses a critical challenge for policymakers, AI developers, and the public alike. Failing to address this issue risks allowing AI to become an unwitting amplifier of state-backed propaganda, potentially reshaping global discourse to serve authoritarian agendas.
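
One way to picture the fact-checking partnerships proposed above is as a screening step that compares a chatbot’s draft answer against a feed of already-debunked narratives. The sketch below is a crude lexical version of that idea; the narrative list, threshold, and function names are hypothetical, and a real system would need semantic matching and human review.

```python
from difflib import SequenceMatcher

# Hypothetical feed of debunked narratives from fact-checking partners; the two
# entries below paraphrase false claims cited in NewsGuard's audit.
KNOWN_FALSE_NARRATIVES = [
    "Zelensky banned Trump's Truth Social platform in Ukraine",
    "Azov Battalion fighters burned an effigy of Trump",
]

def matches_known_falsehood(answer: str, threshold: float = 0.6) -> list[str]:
    """Return known false narratives that an answer closely resembles.

    A simple lexical similarity check; it deliberately omits the semantic
    matching and human review a production system would require.
    """
    answer_l = answer.lower()
    return [
        claim for claim in KNOWN_FALSE_NARRATIVES
        if SequenceMatcher(None, answer_l, claim.lower()).ratio() >= threshold
    ]

# Example: a draft answer echoing a debunked claim is flagged before it is shown.
print(matches_known_falsehood(
    "Zelensky has banned Trump's Truth Social platform in Ukraine."
))
```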
