Russian Disinformation Network Manipulates Western AI Chatbots to Spread Pro-Kremlin Propaganda

A sophisticated Russian disinformation operation known as the Pravda network is exploiting Western AI chatbots to disseminate pro-Kremlin propaganda, raising concerns about the vulnerability of these powerful language models to manipulation. The development comes at a sensitive time, with reports suggesting the United States has temporarily halted its cyber operations against Russia. The Pravda network, a well-resourced operation based in Moscow, aims to influence global narratives by flooding the web with pro-Russian falsehoods designed to be ingested by large language models (LLMs). This tactic, dubbed "LLM grooming" by researchers, goes beyond chatbots simply picking up existing disinformation online; it involves deliberately seeding content so that chatbots amplify propaganda and carry it to a broader audience.

A recent investigation by NewsGuard, a disinformation watchdog, revealed the alarming extent of this manipulation. Their study of ten leading AI chatbots, including prominent models like OpenAI’s ChatGPT-4, Microsoft’s Copilot, and Google Gemini, found that these systems repeated Pravda-sourced falsehoods over 33% of the time, effectively advancing a pro-Moscow agenda. The sheer volume of pro-Russian articles produced by the Pravda network – estimated at 3.6 million in 2024 – contributes to the contamination of training data, influencing the responses generated by these AI systems. This allows false claims and propaganda to be presented as factual information by seemingly trustworthy AI assistants.

The American Sunlight Project, a non-profit organization, has also sounded the alarm about the growing reach of the Pravda network, sometimes referred to as "Portal Kombat," and the likelihood its content is polluting the training data of LLMs. This raises significant concerns about the integrity of information accessed through these increasingly popular AI tools. The ability of the Pravda network to disseminate disinformation on such a massive scale, coupled with its potential to influence AI systems, represents a serious threat to democratic discourse worldwide, as noted by Nina Jankowicz, chief executive of the American Sunlight Project.

The potential for this disinformation to become more pervasive is heightened by the reported pause in US cyber operations against Russia. Multiple media outlets reported that Defense Secretary Pete Hegseth ordered a halt to all cyber operations targeting Russia, including the planning of offensive actions. While the Pentagon has declined to comment on these reports, the alleged pause comes amid President Donald Trump’s push for negotiations to end the ongoing conflict in Ukraine and follows a contentious meeting between Trump and Ukrainian President Volodymyr Zelensky. This pause in cyber operations, if confirmed, could create a more permissive environment for disinformation campaigns.

The Pravda network, established in April 2022 following Russia’s full-scale invasion of Ukraine, has rapidly expanded to cover 49 countries and dozens of languages. Rather than producing original material, it aggregates content from sources such as Russian state media and pro-Kremlin influencers. This content, often containing false narratives such as allegations of US-operated bioweapons labs in Ukraine, is amplified across social media platforms including X, Telegram, and Bluesky. The network’s strategy leverages the perceived credibility of AI chatbots to legitimize these falsehoods and disseminate them to a wider audience.

NewsGuard’s study highlighted how effectively the Pravda network has infiltrated these AI systems. All ten chatbots tested repeated disinformation originating from Pravda, with seven even citing specific Pravda articles as their sources. In one example, six of the chatbots presented a fabricated narrative about Zelensky banning Truth Social, a social media platform, as factual, often citing Pravda articles as evidence. This demonstrates the susceptibility of chatbots to manipulation and their potential to unwittingly spread disinformation. Furthermore, the research revealed that chatbots also repeated false narratives promoted by John Mark Dougan, a US fugitive and Kremlin propagandist, who has openly advocated for leveraging AI as a tool to spread pro-Russian narratives. This highlights the deliberate and strategic nature of the campaign to exploit AI for disinformation purposes.
