The Evolution of Russian Disinformation: AI, Corruption Narratives, and Western Funding
As the war in Ukraine grinds into its fourth year, the Russian disinformation machine has adapted and intensified its efforts. NewsGuard has identified and debunked over 300 false narratives related to the conflict, revealing a clear evolution in both content and tactics. Initially, the focus was on denying civilian casualties and promoting the “denazification” narrative. However, as the war progressed, the Kremlin’s propaganda shifted toward accusations of Ukrainian corruption, Zelensky’s supposed unpopularity, and the alleged misuse of Western aid. This shift coincides with increased Western support for Ukraine and aims to undermine public confidence in both Ukrainian leadership and the efficacy of international assistance.
One of the most striking developments is the increased reliance on artificial intelligence. While AI-generated disinformation was relatively rare in the first year of the war, its use has exploded since then. AI has become a force multiplier for Russian propagandists, allowing them to create more persuasive and widespread campaigns. Deepfake videos, AI-generated articles, and fabricated news reports impersonating credible Western outlets such as the BBC and CNN are growing increasingly sophisticated. The quality of deepfakes depicting Zelensky, for example, has improved dramatically, making them harder to detect. These advances allow compelling false narratives to spread rapidly across multiple platforms and languages, reaching wider audiences and potentially swaying public opinion.
Furthermore, AI has enabled the creation of fake news websites that mimic legitimate media outlets. These sites, often filled with AI-generated content, amplify disinformation narratives and lend a veneer of credibility to otherwise unsubstantiated claims. For instance, a fake E! News video segment, featuring an AI-generated voiceover, falsely accused USAID of funding celebrity visits to Kyiv. The video garnered millions of views and was shared by prominent figures, demonstrating the reach of AI-driven disinformation. The campaign has been attributed to Matryoshka, a known Russian influence operation, underscoring the Kremlin’s direct role in using AI for propaganda.
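One building block of spotting such impostor sites is flagging domains that closely imitate known outlets. The sketch below is a minimal, hypothetical illustration of that idea using string similarity; the outlet list, sample domains, and threshold are assumptions for demonstration, not NewsGuard’s actual methodology.

```python
# Illustrative sketch: flag domains that closely resemble known news outlets.
# The outlet list, candidate domains, and threshold are hypothetical examples.
from difflib import SequenceMatcher

KNOWN_OUTLETS = ["bbc.com", "cnn.com", "eonline.com"]

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two domain strings."""
    return SequenceMatcher(None, a, b).ratio()

def flag_lookalikes(domain: str, threshold: float = 0.7) -> list[str]:
    """Return known outlets this domain closely resembles without matching."""
    return [
        outlet for outlet in KNOWN_OUTLETS
        if domain != outlet and similarity(domain, outlet) >= threshold
    ]

if __name__ == "__main__":
    for candidate in ["bbc-news.com", "eonlines.com", "example.org"]:
        matches = flag_lookalikes(candidate)
        if matches:
            print(f"{candidate} resembles: {', '.join(matches)}")
```

Real-world detection is far more involved, weighing site content, registration records, and network infrastructure, but the comparison above captures the basic shape of lookalike flagging.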
The shift in narrative focus from “denazification” to corruption allegations appears strategically timed to the increasing flow of Western aid to Ukraine. As billions of dollars in assistance have been provided, Russian disinformation efforts aim to sow distrust in the Ukrainian government’s handling of those funds. False claims that Zelensky and his wife have used aid money for luxury purchases, including mansions, yachts, and sports cars, have proliferated online, accumulating tens of millions of views. The narrative seeks to portray Ukrainian leadership as corrupt and irresponsible, with the potential to erode international support for continued assistance.
A key figure in this disinformation push is John Mark Dougan, an American ex-law enforcement officer turned pro-Kremlin propagandist. Dougan, linked to the Russian influence operation Storm-1516, is behind many of the corruption allegations targeting Zelensky. His campaigns employ a range of tactics, from fabricated documents and fake phone conversations to AI-generated videos and images. He leverages a network of phony local news sites to disseminate these falsehoods, often targeting specific audiences and tailoring the content to resonate with their existing biases. The reach and impact of Dougan’s campaigns have grown significantly over time, demonstrating the effectiveness of his methods.
Dougan’s operations have become increasingly sophisticated, incorporating AI to generate content and amplify its distribution. He has even boasted about developing his own “censorship-free” AI server and pushing Russian narratives to influence global AI models. This indicates a concerted effort not only to spread disinformation but also to shape the systems that power online information retrieval: if false claims seeded across the web are absorbed into AI training data, chatbots may repeat them as fact. That prospect poses a significant challenge to combating disinformation, as it could contaminate the information ecosystem at its core.
A troubling aspect of this disinformation landscape is the unwitting funding provided by Western brands through programmatic advertising. Many websites spreading Russian disinformation earn revenue from ads placed automatically by ad tech platforms such as Google. Major brands, often unknowingly, are thereby financing the dissemination of these false narratives. This underscores the need for greater transparency and accountability in the programmatic advertising ecosystem, and the responsibility of brands to ensure their ads do not appear on sites promoting harmful content. Despite stated policies against funding disinformation, the continued appearance of ads from reputable brands on these sites suggests those policies are poorly enforced or insufficiently monitored. In a particularly ironic twist, ads from organizations supporting Ukraine, such as the UNHCR, have appeared on sites pushing pro-Russian disinformation.
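To make the mechanism concrete, here is a rough sketch of the brand-safety control that programmatic systems are supposed to apply: checking each publisher domain against an exclusion list before an ad is placed. The domain names and list contents are hypothetical, and real ad platforms implement this through their own configurable exclusion-list settings rather than custom code.

```python
# Illustrative sketch of a brand-safety exclusion check: before serving an ad,
# compare the publisher domain against a maintained blocklist.
# Domains and list contents are hypothetical examples.
EXCLUSION_LIST = {
    "fake-news-example.com",
    "disinfo-site-example.net",
}

def is_placement_allowed(publisher_domain: str) -> bool:
    """Return True if the publisher is not on the exclusion list."""
    domain = publisher_domain.lower().strip()
    # Block exact matches and any subdomain of an excluded site.
    return not any(
        domain == blocked or domain.endswith("." + blocked)
        for blocked in EXCLUSION_LIST
    )

if __name__ == "__main__":
    for site in ["news.fake-news-example.com", "reputable-outlet-example.org"]:
        verdict = "allowed" if is_placement_allowed(site) else "blocked"
        print(f"{site}: {verdict}")
```

As the sketch suggests, the logic itself is trivial; the hard part, and the gap the reporting points to, is keeping such lists current as new disinformation domains appear faster than they can be cataloged.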
This complex web of disinformation, fueled by advancing technology, shifting narratives, and unwitting financial support, poses a serious threat to informed public discourse and international stability. Addressing it requires a multi-pronged approach: greater media literacy among consumers, improved detection and debunking of false narratives, stricter enforcement of platform policies against disinformation, and greater corporate responsibility in advertising practices. The escalating use of AI in these campaigns also demands international collaboration and technological innovation. As the war in Ukraine continues, so too will the evolution of disinformation tactics, requiring vigilance and proactive countermeasures to mitigate their impact.