Social Media Platforms Profit from Election Disinformation Despite Policies
As the 2024 US general election draws near, political advertising on social media platforms has become a focal point of concern. A recent study by the Institute for Strategic Dialogue (ISD) reveals a troubling trend: despite having policies in place to prevent the spread of election disinformation, major platforms like Meta, X (formerly Twitter), and Google are profiting from ads that promote false and misleading claims about election integrity. This investigation analyzed ads run on these platforms, as well as Snapchat, between April and September 2024, a crucial period when many voters begin to engage with election-related news.
The ISD’s research employed varied methodologies to access and analyze ad data, since data availability and functionality differ from platform to platform. For Meta (Facebook and Instagram), the researchers used the platform’s Ads API. X’s publicly available political ads repository was queried with a third-party tool. Manual searches were conducted on Google (YouTube and Search) because of limitations in the platform’s ad library. Finally, a sample of ads was reviewed from Snap’s publicly available political ads database.
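To illustrate the kind of programmatic access the researchers relied on for Meta, the sketch below builds a keyword query against Meta’s public Ad Library API (the `ads_archive` Graph API endpoint). This is a minimal illustration, not the ISD’s actual pipeline: the API version, field selection, and search term are assumptions, and the access token is a placeholder that a real caller would obtain from Meta’s developer portal.

```python
import urllib.parse

# Meta's public Ad Library API endpoint (Graph API "ads_archive").
# The version string is an assumption; adjust to the current Graph API release.
AD_ARCHIVE_URL = "https://graph.facebook.com/v19.0/ads_archive"

def build_ads_archive_query(search_terms, country="US", access_token="YOUR_TOKEN"):
    """Return a full request URL for a political-ad keyword search.

    Parameter names follow Meta's Ad Library API documentation; the
    field list is an illustrative subset of what the API can return.
    """
    params = {
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": '["%s"]' % country,
        "fields": "page_name,ad_creative_bodies,spend,impressions,ad_delivery_start_time",
        "access_token": access_token,  # placeholder; requires a real token
    }
    return AD_ARCHIVE_URL + "?" + urllib.parse.urlencode(params)

# Build a query for ads mentioning election integrity.
url = build_ads_archive_query("election integrity")

# Fetching the results requires a valid token, e.g.:
#   import urllib.request, json
#   ads = json.load(urllib.request.urlopen(url))["data"]
```

Researchers typically page through the returned `data` array and filter the ad creative text locally, since the API's keyword matching is broad.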
The study’s findings paint a stark picture of lax enforcement. While each platform boasts policies designed to curb the amplification of false election narratives, the ISD found numerous instances of ads containing unsubstantiated claims that likely violated these policies. These ads perpetuated false narratives about insecure mail-in voting and baseless predictions of “rigged” or “stolen” elections. Disturbingly, Meta, X, and Google all permitted ads linking immigration enforcement concerns to election integrity, often featuring false assertions about non-citizen voting, which directly contravene platform policies against undermining public trust in elections.
The financial implications of this lapse in enforcement are substantial. The report identified five advertisers – The Heritage Foundation, Judicial Watch, TheBlaze, Honest Elections Project Fund, and The Daily Caller – that collectively spent over $600,000 on Meta platforms on ads containing election integrity keywords. Though not all of these ads were deemed to violate Meta’s policies, the significant investment in casting doubt on election security raises serious concerns. X, while a smaller player in the political ad market, also allowed ads promoting election denial narratives, including some from the National Republican Senatorial Campaign Committee (NRSC), demonstrating the reach of these misleading messages.
Further complicating the issue is X’s news publisher exemption. This loophole allows media outlets to run ads with false or misleading election information without disclosing crucial data like spending or targeting parameters. The Washington Times, for example, used this exemption to run ads with unfounded claims about non-citizen voting, reaching millions of views without any transparency regarding the ad campaign’s funding or targeting. This exemption effectively allows news organizations to circumvent the safeguards designed to prevent the spread of disinformation, highlighting a critical vulnerability in X’s policies.
Google, despite its stated commitment to “responsible political advertising,” also fell short in enforcing its policies. Researchers found ads from Judicial Watch that falsely accused Democrats of intentionally neglecting election integrity measures, directly contradicting Google’s policy against undermining trust in democratic processes. These examples of policy violations across multiple platforms underscore a troubling disregard for the potential consequences of election disinformation.
The scale of spending on these misleading ads highlights the symbiotic relationship between platforms and advertisers seeking to sow distrust in elections. Meta, projected to earn over $61 billion in ad revenue in 2024, has profited significantly from these ads, accepting hundreds of thousands of dollars from groups pushing false narratives about election fraud and non-citizen voting. The Brennan Center’s analysis, cited in the report, indicates substantial spending on political ads on Google and Meta, totaling $619 million, with a considerable portion dedicated to the presidential race.
The ISD’s findings expose the disconnect between platforms’ stated commitments to combating election disinformation and their actual enforcement practices. While Snapchat emerged as the only platform where researchers did not identify any ads violating its policies, the widespread presence of misleading ads on Meta, X, and Google paints a troubling picture. These platforms are not only profiting from the spread of disinformation but also contributing to the polarization of public opinion regarding election security. As voters head to the polls, the unchecked dissemination of false narratives through paid advertising on these influential platforms has the potential to undermine faith in democratic processes and erode trust in the electoral system.
The findings underline a critical need for greater transparency and accountability in political advertising on social media. Platforms must strengthen their enforcement mechanisms and close loopholes that allow bad actors to exploit the system. As primary avenues for political advertising in the digital age, these companies bear a significant responsibility to ensure the integrity of the information shared on their services. Failure to do so risks further eroding public trust in both the electoral process and the platforms themselves. The ISD’s research serves as a stark reminder of the urgent need for proactive measures to safeguard the democratic process from the manipulative influence of disinformation campaigns.
The prevalence of misleading political advertising on major social media platforms, as highlighted by the ISD report, raises important questions about the effectiveness of self-regulation in the tech industry. The findings suggest that current policies are insufficient to prevent the spread of disinformation, particularly during critical periods like elections. This emphasizes the need for more robust oversight, whether through enhanced platform accountability measures, increased regulatory scrutiny, or a combination of both.
Moreover, the report underscores the importance of media literacy and critical thinking skills for users navigating the digital landscape. Voters must be equipped to discern credible information from misleading or false narratives. This requires ongoing efforts to promote media literacy education and empower individuals to critically evaluate the information they encounter online, particularly during election cycles. The responsibility to combat disinformation ultimately rests not only on platforms but also on a collective societal effort to foster a more informed and discerning public.
The long-term consequences of unchecked disinformation campaigns on social media can be far-reaching, potentially impacting voter turnout, shaping public opinion on critical issues, and undermining trust in democratic institutions. The findings of this study underscore the need for a multi-faceted approach to address the problem, involving platforms, policymakers, civil society organizations, and individual users.
Furthermore, the report’s focus on the financial incentives driving the spread of disinformation highlights the need for greater transparency in online political advertising. Clearer disclosure requirements regarding ad spending, targeting parameters, and the identity of advertisers can help shed light on the forces behind disinformation campaigns and empower users to make more informed decisions about the information they consume.
Finally, the study serves as a call to action for social media companies to prioritize the integrity of their platforms over profit. While the financial gains from political advertising are undeniable, they must not come at the expense of democratic values and public trust. The platforms must commit to robust enforcement of their policies, closing loopholes that allow the spread of disinformation, and investing in research and development to better identify and mitigate these malicious campaigns. The future of informed democratic discourse depends on it.
The ISD study’s findings are particularly relevant in the context of increasing political polarization and the erosion of trust in traditional media sources. As social media becomes an increasingly dominant source of information for many voters, the spread of disinformation through these platforms can have a disproportionately large impact on public opinion and electoral outcomes.
The report’s methodology, which involved diverse approaches to data collection and analysis across different platforms, highlights the challenges researchers face in studying the phenomenon of online disinformation. The lack of standardized data access and reporting mechanisms across platforms makes it difficult to gain a comprehensive understanding of the problem and evaluate the effectiveness of different intervention strategies. This underscores the need for greater cooperation between platforms and researchers to facilitate access to data and enable more comprehensive analysis.