Russian Disinformation Network Manipulating Western AI Chatbots to Spread Pro-Kremlin Propaganda
In a concerning development, researchers have discovered a sprawling Russian disinformation network manipulating Western AI chatbots to disseminate pro-Kremlin propaganda. The revelation comes at a sensitive time, with reports suggesting the United States has temporarily halted its cyber operations against Moscow. The network, known as Pravda or Portal Kombat, is a well-resourced operation based in Moscow and dedicated to spreading pro-Russian narratives globally. It achieves this by flooding the large language models (LLMs) that underpin AI chatbots with a deluge of pro-Kremlin falsehoods, effectively poisoning the information pool from which those chatbots learn.
A study by the disinformation watchdog NewsGuard examined ten leading AI chatbots and found a disturbing trend: the chatbots repeated falsehoods originating from the Pravda network more than 33 percent of the time, effectively promoting a pro-Moscow agenda. The finding marks a significant escalation of the disinformation threat. It is no longer merely a matter of AI models passively absorbing disinformation circulating online; chatbots are now being deliberately targeted to manipulate their output and reach a wider audience, a tactic researchers have dubbed “LLM grooming.” The sheer volume of pro-Russian propaganda, an estimated 3.6 million articles in 2024 alone, now embedded within Western AI systems poses a serious threat to the integrity of information.
The American Sunlight Project, a non-profit organization, corroborated NewsGuard’s findings in a separate study, warning of the Pravda network’s escalating reach and its likely contamination of the training data used by large language models. The network operates at an unprecedented scale and level of sophistication, raising alarms about the potential for widespread manipulation of public opinion. Nina Jankowicz, chief executive of the American Sunlight Project, emphasized the gravity of the situation, stating that the network’s unchecked expansion poses a direct threat to democratic discourse worldwide. The potential for this disinformation to become even more pervasive is heightened by the reported pause in US cyber operations against Russia.
The timing of this pause raises concerns among experts. Reports indicate that Defense Secretary Pete Hegseth ordered a halt to all US cyber operations against Russia, encompassing both active operations and the planning of future offensive actions. This decision reportedly stems from a broader reassessment of US strategy against Moscow, although the duration and scope of the pause remain unclear. The Pentagon declined to comment on the reports. However, the timing of the pause, coinciding with President Donald Trump’s push for negotiations to end the three-year war in Ukraine and a contentious White House meeting with Ukrainian President Volodymyr Zelensky, has fueled speculation about the motivations behind the decision.
The Pravda network, launched in April 2022 after Russia’s full-scale invasion of Ukraine, has rapidly expanded and now operates across 49 countries and dozens of languages. Rather than producing original material, it aggregates content from sources including Russian state media and pro-Kremlin influencers. Millions of articles laden with pro-Russian propaganda, including demonstrably false claims such as the alleged operation of US-run bioweapons labs in Ukraine, are then amplified across social media platforms such as X, Telegram, and Bluesky. The network’s effectiveness in manipulating AI chatbots is evident in NewsGuard’s study: all ten tested chatbots, including OpenAI’s ChatGPT-4, Microsoft’s Copilot, and Google’s Gemini, repeated disinformation disseminated by the Pravda network, and seven of them directly cited specific Pravda articles as sources, underscoring the depth of the infiltration.
The manipulation extends to a variety of false narratives. For instance, after unsubstantiated claims circulated on social media that President Zelensky had banned Truth Social following criticism from Donald Trump, six of the ten chatbots presented the fabricated story as factual, often citing Pravda articles as evidence. The incident highlights how vulnerable AI chatbots are to manipulation and how readily they can unwittingly spread disinformation. John Mark Dougan, a US fugitive turned Kremlin propagandist, has boasted about leveraging AI to manipulate global narratives, describing AI not as something to fear but as a tool to be exploited. His comments underscore the deliberate, strategic nature of the disinformation campaign and its potential long-term impact on the information landscape.

The implications of this coordinated manipulation of AI are far-reaching, potentially undermining public trust in information sources and further blurring the line between fact and fiction. The situation calls for urgent attention and collaborative efforts to develop robust countermeasures that protect the integrity of information in the age of AI.