AI’s Muted Impact on the 2024 US Elections: Old Misinformation Tactics Prevail
The 2024 US presidential election was widely anticipated to be the first significantly shaped by the rise of artificial intelligence, particularly in the spread of misinformation. Fears abounded that AI-generated deepfakes would manipulate voters and disrupt the democratic process. After a voice clone mimicking President Biden was used in robocalls to New Hampshire voters, the FCC banned AI-generated voices in robocalls, and the incident set the stage for widespread concern. Sixteen states enacted legislation to address AI’s use in campaigns, primarily focusing on disclaimer requirements for synthetic media, and the Election Assistance Commission issued an “AI toolkit” to guide election officials in navigating an anticipated influx of AI-generated misinformation.
Despite these preparations, the expected deluge of AI-driven falsehoods never materialized. Misinformation did play a prominent role, particularly concerning vote counting and mail-in ballots, but it relied largely on established techniques: text-based social media claims, deceptively edited videos, and out-of-context images. Experts observed little evidence that sophisticated AI manipulations altered the election landscape; traditional methods of spreading falsehoods proved sufficient to fuel existing narratives and sow discord.
The AI-generated content that did gain traction predominantly reinforced pre-existing narratives rather than introducing new fabrications. For instance, after Donald Trump and JD Vance falsely claimed that Haitian immigrants were eating pets in Springfield, Ohio, AI-generated images and memes depicting animal abuse proliferated online, amplifying an already established false narrative rather than creating a wholly new deception. The most effective misinformation campaigns, in other words, leveraged AI to bolster existing biases and anxieties, not to craft novel, convincing falsehoods.
Experts attribute AI’s relatively muted influence on the election to a combination of factors. Firstly, proactive measures by technology platforms and policymakers appear to have played a crucial role. Meta, TikTok, and OpenAI implemented safeguards, including disclosure requirements, content labels, and outright bans on using their tools for political campaigning. These interventions, along with legislative efforts in various states, limited the space for harmful AI-generated political speech. Public awareness campaigns also contributed to a more discerning electorate, less susceptible to deception by synthetic media.
Secondly, traditional misinformation techniques remained effective. Prominent figures with large followings, such as Donald Trump, were able to disseminate false claims through established channels like speeches, media interviews, and social media posts, negating the need for sophisticated AI-generated content. Trump’s repeated false claims about non-citizen voting, despite being debunked, gained traction among segments of the population, demonstrating the continuing power of conventional misinformation tactics.
Thirdly, the cost and technical expertise required to create compelling deepfakes proved a barrier. While “cheap fakes,” deceptively edited authentic content, proliferated, high-quality deepfakes were far less common. The AI-generated Biden robocall in New Hampshire, while alarming, proved to be an isolated incident rather than a harbinger of widespread deepfake deployment. Its creator, a street magician, produced the audio with minimal resources, underscoring that crude synthetic audio is easy to make, while sophisticated, convincing deepfakes remain considerably harder to produce at scale.
The political landscape also saw politicians deflecting criticism by blaming AI. Trump, for example, falsely attributed a montage of his gaffes to AI, and similarly dismissed images of a large crowd of Kamala Harris supporters as AI-generated. This deflection, an example of what researchers call the “liar’s dividend,” represents a novel way of manipulating the narrative around AI’s influence: rather than using AI directly to spread misinformation, some politicians exploited public anxieties about AI to discredit unfavorable but authentic content as fabricated.
Analysis of political deepfakes by researchers at Purdue University revealed that the majority were created for satire, followed by those intended to damage reputations and those made for entertainment. Deepfakes targeting political candidates primarily served as “extensions of traditional U.S. political narratives,” reinforcing existing biases rather than introducing novel claims. The finding underscores the interplay between AI-generated content and existing political discourse: AI acted as an amplifier of established narratives rather than a catalyst for entirely new forms of misinformation.
While foreign interference remained a concern, AI did not play a revolutionary role in those efforts. Foreign actors relied on traditional tactics, such as employing actors in staged videos rather than sophisticated deepfakes, to spread disinformation and undermine election integrity. The US intelligence community’s Foreign Malign Influence Center noted that foreign adversaries faced significant hurdles in using AI for election interference, including overcoming technical limitations, evading detection, and effectively targeting and disseminating AI-generated content.
Though AI’s impact was less pronounced than anticipated, its potential for future manipulation remains a significant concern. Technology platforms continue to develop safeguards, including watermarks, labels, and fact-checks, to curb the spread of harmful content. Challenges remain, however: reporting showed that some AI tools could still be prompted to generate targeted campaign messages despite stated restrictions. The ongoing evolution of AI necessitates continuous adaptation and refinement of these safeguards to stay ahead of potential misuse. The 2024 election served as a crucial learning experience, underscoring the need for continued vigilance, research, and collaboration among policymakers, technology companies, and the public to combat the evolving threat of misinformation in the digital age.