EU Flexes Regulatory Muscle to Safeguard Elections Against Digital Threats
In the lead-up to the June 2024 European Parliament elections, the European Union (EU) leveraged its significant market power and regulatory framework to bolster transparency and mitigate electoral risks stemming from online platforms. The Digital Services Act (DSA), which became fully applicable in February 2024, required very large online platforms and search engines to submit detailed transparency reports, conduct risk assessments, and grant researchers access to their data. Reinforcing this framework, the European Commission issued election guidelines in April 2024 that outlined specific measures under the DSA, including labeling political ads and AI-generated content and ensuring adequate resources for internal election-related teams. The Commission also initiated formal proceedings against Meta and X (formerly Twitter) for potential DSA violations, including Meta’s alleged noncompliance with rules on deceptive electoral advertising and X’s shortcomings in mitigating election-related risks. Complementing the DSA, the EU’s Code of Practice on Disinformation, a voluntary agreement, encouraged signatories, including major platforms and advertisers, to proactively debunk and label manipulated content, establish transparency centers, and demonetize false or misleading information. These efforts aimed to equip voters with reliable information for informed electoral decisions, though the voluntary nature of the Code raised concerns about its effectiveness and enforceability.
Bridging the Gap: Information Sharing Between Governments and Tech Companies
Effective collaboration between democratic governments and tech companies, with appropriate oversight and free speech safeguards, can significantly enhance access to credible information for users. Government agencies often possess insights into foreign interference activities that can be invaluable to platforms combating cyberattacks or coordinated disinformation campaigns. In the United States, however, cooperation between federal agencies and tech platforms faced setbacks in the crucial period leading up to the November 2024 elections due to legal challenges. Lawsuits filed by Louisiana and Missouri, alleging that government interaction with tech companies constituted censorship, led agencies to scale back information sharing. The Supreme Court ultimately dismissed the case in June 2024, finding that the plaintiffs lacked standing and that the lower court’s factual findings were “clearly erroneous.” While the ruling offered no specific guidance on what communication between agencies and platforms is permissible within free speech boundaries, the FBI subsequently announced plans to enhance transparency and establish clearer guidelines for its engagement with tech companies.
Empowering Voters: Fact-Checking and Digital Literacy Initiatives
Numerous initiatives emerged globally during the coverage period to bolster access to authoritative information through fact-checking programs, centralized resource hubs, and digital literacy training. Taiwan set a global benchmark with its transparent, decentralized, and collaborative approach to fact-checking and disinformation research. During the January 2024 elections, programs like Cofacts, a crowdsourced fact-checking platform, played a crucial role in building trust in online information across the political spectrum. Cofacts found that false narratives about Taiwan’s foreign relations, particularly with the United States, circulated widely on the messaging platform Line. Other civil society organizations, such as IORG and Fake News Cleaner, further strengthened resilience against disinformation through community outreach and educational programs.
Coalitions and Governmental Support for Information Integrity
In India, the Shakti Collective, a coalition of more than 50 fact-checking groups and news publishers, formed the largest initiative of its kind in the country’s history. The collective tackled false information and deepfakes, translated fact-checks into numerous Indian languages, and strengthened capacity for fact-checking and the detection of AI-generated content. Its diverse membership enabled it to reach a wide range of voters and identify emerging trends, such as the rise of false claims in regional languages about rigged electronic voting machines. Government support for such initiatives also played a significant role. The European Digital Media Observatory (EDMO), established by the EU, conducted research and partnered with fact-checking and media literacy organizations. EDMO exposed a Russia-linked disinformation network operating fake websites in several EU languages and found that generative AI was used in a relatively small percentage of detected false narratives. Mexico’s National Electoral Institute (INE) launched the Certeza INE 2024 project, which allowed voters to report suspicious content through a virtual assistant on WhatsApp; reported content was then fact-checked by a partnership that included Meedan, Agence France-Presse, Animal Político, and Telemundo.
Challenges and Limitations of Fact-Checking
Fact-checkers are often at the forefront of identifying disinformation trends, actors, and technologies, providing crucial insights for policy and programmatic interventions. While research demonstrates that fact-checking is effective in certain contexts, corrections do not always translate into widespread behavioral change among users. A key challenge remains the inherent asymmetry between fact-checkers and disinformation actors: debunking false claims requires significantly more time and effort than creating and spreading them. Fact-checking initiatives face particular hurdles in highly polarized environments, where skepticism toward independent media undermines their credibility.
Regulating Generative AI in the Electoral Landscape
Concerns that generative AI could blur the line between truth and falsehood during elections prompted regulatory action in at least 11 countries. These regulations aim to curb problematic uses of generative AI, such as impersonation, and to encourage responsible behavior from political campaigns. Labeling requirements give voters crucial transparency to distinguish authentic from fabricated content, further strengthening information integrity in the digital age.