Russian Disinformation Campaign Exploits AI Chatbots to Spread Pro-Kremlin Propaganda

A sophisticated Russian disinformation network, known as the Pravda network or "Portal Kombat," is actively manipulating Western AI chatbots to disseminate pro-Kremlin propaganda, raising concerns about the vulnerability of these advanced language models to manipulation. Researchers have discovered that these chatbots frequently repeat falsehoods originating from the Pravda network, effectively amplifying pro-Moscow narratives and potentially influencing public opinion. This tactic, dubbed "LLM grooming," involves flooding large language models (LLMs) with biased information to skew their responses.

The Pravda network, a well-resourced operation based in Moscow, has significantly expanded its reach since its inception in April 2022, following Russia’s full-scale invasion of Ukraine. It now covers 49 countries and publishes content in dozens of languages. The network does not create original content but aggregates and disseminates articles from sources like Russian state media and pro-Kremlin influencers. This content, often laden with false claims and conspiracy theories, is then amplified through social media platforms like X, Telegram, and Bluesky, further extending its reach.

A study conducted by NewsGuard, a disinformation watchdog, examined ten leading AI chatbots, including prominent models like OpenAI’s ChatGPT-4, Microsoft’s Copilot, and Google Gemini. The results revealed a disturbing trend: these chatbots repeated disinformation originating from the Pravda network over 33% of the time. Even more concerning, seven of the ten chatbots directly cited Pravda articles as sources for their responses, demonstrating the network’s success in infiltrating the knowledge base of these AI systems. This manipulation highlights the danger posed by "LLM grooming," which goes beyond simply absorbing existing online disinformation; it involves a deliberate and targeted effort to poison the data these chatbots learn from.

The consequences of this manipulation can be significant. For example, NewsGuard tested the chatbots by asking them why Ukrainian President Volodymyr Zelensky banned Truth Social, Donald Trump’s social media platform. Although Truth Social hadn’t launched in Ukraine, and no ban existed, six chatbots presented the fabricated narrative as fact, often citing Pravda articles as evidence. This example illustrates how manipulated chatbots can spread false narratives and potentially influence public perception of real-world events.

The American Sunlight Project, a non-profit organization, has also raised alarms about the Pravda network’s growing influence and its potential to contaminate the training data of large language models. The Project’s chief executive, Nina Jankowicz, emphasized the threat posed by such sophisticated disinformation campaigns to democratic discourse worldwide, particularly given the network’s ability to spread disinformation at an unprecedented scale and potentially influence AI systems. This potential for large-scale manipulation, coupled with the rapid advancement of AI technology, presents a serious challenge to the integrity of information and the fight against disinformation.

Experts are further concerned by reports of a potential pause in US cyber operations against Russia, including offensive actions. This reported pause, ordered by Defense Secretary Pete Hegseth, is part of a broader reevaluation of US operations against Moscow and coincides with President Donald Trump’s push for negotiations to end the war in Ukraine. While the Pentagon declined to comment on the reports, the timing of the pause raises concerns about the potential for an increase in disinformation campaigns and the diminished capacity to counter them effectively. This reduced oversight, coupled with active disinformation campaigns, leaves Western AI systems vulnerable to manipulation.

The Pravda network’s strategy aligns with the stated intentions of figures like John Mark Dougan, a US fugitive turned Kremlin propagandist, who has openly advocated for leveraging AI to spread pro-Russian narratives. Quoted by NewsGuard, Dougan believes that influencing worldwide AI through these narratives is not something to fear but a tool to be exploited. This explicit endorsement of manipulating AI for propaganda purposes underscores the seriousness of the threat and the need for robust countermeasures to safeguard the integrity of information in the age of artificial intelligence. The challenge facing developers and researchers is to develop strategies to identify and mitigate the impact of such coordinated disinformation campaigns on AI systems. The stakes are high, as the success or failure of these efforts could significantly impact the future of information integrity and democratic discourse.

Share.
Exit mobile version