
AI Chatbots Exhibit Pro-Russian Propaganda Biases.

By Press Room | March 10, 2025

Kremlin-Backed Disinformation Campaign Corrupts Western AI Chatbots

A sophisticated Moscow-based disinformation network, known as Pravda (meaning "truth" in Russian), has successfully infiltrated prominent Western AI chatbots, injecting them with pro-Kremlin propaganda, according to a new study by NewsGuard. This marks a significant escalation in the ongoing information war, as the group targets AI algorithms rather than human audiences directly. Pravda operates as an elaborate propaganda laundering machine, drawing content from Russian state media, pro-Kremlin influencers, and official government sources, and disseminating it through a network of seemingly independent websites. By flooding search engine results and web crawlers with these fabricated narratives, Pravda manipulates the data ingested by large language models (LLMs), the underlying technology powering AI chatbots, effectively poisoning their informational outputs.
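To make that dynamic concrete, here is a minimal, hypothetical sketch in Python (the domain names and the retrieval function are invented for illustration; this is not NewsGuard’s methodology or any chatbot’s actual pipeline). It shows how a retrieval step that weighs claims by how many pages repeat them can be skewed once a single network floods the web with near-duplicate articles.

    from collections import Counter

    # Hypothetical mini-corpus: (domain, claim) pairs a crawler might collect.
    # Two pages debunk a claim; 150 mirror sites assert it.
    corpus = [
        ("factcheck-one.example", "claim_is_false"),
        ("factcheck-two.example", "claim_is_false"),
    ] + [(f"mirror-site-{i}.example", "claim_is_true") for i in range(150)]

    def naive_retrieve(pages, top_k=2):
        """Rank claims by raw page count -- volume, not source credibility."""
        votes = Counter(claim for _, claim in pages)
        return votes.most_common(top_k)

    print(naive_retrieve(corpus))
    # [('claim_is_true', 150), ('claim_is_false', 2)]
    # The flooded narrative wins on sheer volume, which is the weakness
    # an "LLM grooming" strategy is designed to exploit.

In this toy setup, weighting sources by credibility or de-duplicating near-identical articles would blunt the trick, which is one reason the safeguards discussed below emphasize detection and source quality rather than raw volume.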

NewsGuard’s investigation revealed that this disinformation deluge has resulted in Western AI systems absorbing an estimated 3.6 million propaganda articles in 2024 alone. The study evaluated ten leading chatbots, including OpenAI’s ChatGPT-4, Google’s Gemini, and Microsoft’s Copilot, testing them against 15 false narratives propagated across 150 Pravda-affiliated websites. The results were alarming: all ten chatbots repeated disinformation originating from the Pravda network, and seven cited Pravda articles directly as if they were legitimate sources. In total, 56 of the 450 chatbot responses, roughly one in eight, included links to Pravda’s fabricated claims, referencing 92 distinct articles. These findings confirm an earlier warning from the American Sunlight Project (ASP) that Pravda is pursuing a deliberate "LLM grooming" strategy: flooding the web with content designed to shape what AI models learn and repeat.

The NewsGuard report identified 207 demonstrably false claims disseminated by the network, including fabricated stories about US bioweapons labs in Ukraine and unfounded allegations against Ukrainian President Volodymyr Zelensky. This tactic of remotely corrupting Western AI systems poses a substantial challenge for tech companies striving to keep their chatbots accurate and impartial. The ease with which Pravda has contaminated these systems raises serious questions about the vulnerability of AI to manipulation and the potential for its misuse in spreading disinformation. The findings underscore the urgent need for stronger safeguards and detection mechanisms to keep AI from becoming a conduit for propaganda.

Pravda, also known as Portal Kombat, launched in April 2022, shortly after Russia’s full-scale invasion of Ukraine. Since then, the network has expanded to target 49 countries, publishing in multiple languages and operating across 150 different domains. The French government agency Viginum identified the network in February 2024, connecting its activities to TigerWeb, an IT company based in Russian-occupied Crimea. That finding further solidifies the link between Pravda and Russian state-sponsored disinformation efforts. The network’s scale, spanning multiple languages and platforms, highlights the coordinated and resource-intensive nature of this campaign.

The implications of this study extend beyond the immediate manipulation of AI chatbots. The successful infiltration of these systems demonstrates the potential for disinformation campaigns to exploit the inherent vulnerabilities of AI technology. As AI becomes more integrated into our daily lives, from information retrieval to decision-making processes, the risk of manipulation through disinformation campaigns like Pravda’s poses a significant threat to democratic discourse and informed public opinion. The incident underscores the need for increased scrutiny of AI systems and the development of robust countermeasures to mitigate the spread of disinformation.

In related news, Signal President Meredith Whittaker voiced concerns about the burgeoning field of agentic AI at the SXSW conference, cautioning against potential threats to user privacy and security. Whittaker likened the concept of AI agents – software programs designed to perform tasks on users’ behalf – to "putting your brain in a jar," highlighting the vulnerabilities that come with granting AI extensive access to personal information. She illustrated her point with a seemingly innocuous example: an AI agent tasked with booking concert tickets would need access to web browsers, credit card details, calendars, and messaging apps to complete the task. That level of access, Whittaker argued, creates significant privacy risks, especially when applied to sensitive communication platforms like Signal, which is built around end-to-end encryption. Her concerns echo broader anxieties about the rapid advancement of AI and the potential for its misuse, and she urged a more cautious approach to AI development and deployment, one that prioritizes user privacy and security.
