Social Media Platforms Show Improvement in Blocking Election Disinformation, But Challenges Remain
In the lead-up to the 2024 US Presidential election, concerns regarding the spread of disinformation on social media platforms reached a fever pitch. A pre-election investigation conducted by independent researchers raised serious questions about the efficacy of content moderation policies on platforms like TikTok and YouTube. This investigation involved submitting eight ads containing demonstrably false election information, including claims about online voting and incitements to violence against election workers. These ads were deliberately crafted using "algospeak," substituting letters with numbers and symbols, to mimic tactics employed by malicious actors seeking to circumvent platform safeguards.
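To make the "algospeak" tactic concrete, here is a minimal illustrative sketch in Python, not the researchers' actual ads or any platform's real moderation system: the substitution map, the sample ad text, and the naive keyword filter are all hypothetical. It simply shows how swapping letters for look-alike numbers and symbols can slip past an exact-match blocklist.

```python
# Illustrative sketch only: a toy "algospeak" transform and a naive
# keyword filter, showing why character substitution can evade exact matching.
SUBSTITUTIONS = {
    "a": "4", "e": "3", "i": "1", "o": "0", "s": "$", "t": "7",
}

def to_algospeak(text: str) -> str:
    """Replace selected letters with visually similar characters."""
    return "".join(SUBSTITUTIONS.get(ch.lower(), ch) for ch in text)

def naive_keyword_filter(text: str, banned: list[str]) -> bool:
    """Return True if any banned phrase appears verbatim (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in banned)

if __name__ == "__main__":
    banned_phrases = ["vote online"]           # hypothetical blocklist entry
    ad_copy = "You can vote online this year"  # hypothetical ad text
    obfuscated = to_algospeak(ad_copy)

    print(obfuscated)                                      # Y0u c4n v073 0nl1n3 7h1$ y34r
    print(naive_keyword_filter(ad_copy, banned_phrases))   # True  (caught)
    print(naive_keyword_filter(obfuscated, banned_phrases))  # False (slips through)
```

Real moderation pipelines are far more sophisticated than an exact-match blocklist, but the sketch captures the basic evasion idea the investigation set out to probe.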
The initial results were concerning. TikTok approved four of the eight disinformation-laden ads, raising alarms about the platform’s vulnerability to manipulation. YouTube also approved half of the ads but, crucially, required personal identification before publication, creating a higher barrier to entry for those seeking to spread disinformation. Following the October investigation, TikTok acknowledged the policy violations, attributing the approvals to errors and promising to refine its detection mechanisms.
To assess the impact of this pledge, researchers resubmitted the identical eight ads to both platforms. The results demonstrated a marked improvement. TikTok rejected all eight ads, including those it had previously approved, indicating a positive shift in its content moderation practices. YouTube suspended the researchers’ account under its suspicious-payments policy and flagged half of the ads for unreliable claims or US election advertising, requiring further verification; none of the ads were approved. Importantly, the researchers ensured that none of the ads went live, preventing the spread of actual disinformation.
While these improved results offer a glimmer of hope, it’s crucial to acknowledge the limited scope of this test. The ads were identical to those previously submitted, presenting a relatively straightforward challenge for the platforms’ moderation systems. Nevertheless, the positive outcome underscores the vital role of independent scrutiny in holding social media platforms accountable. The ability of journalists, academics, and NGOs to conduct such tests is paramount, particularly in a climate of growing governmental pressure on counter-disinformation work and tightening platform restrictions on transparency tools.
The broader context surrounding this issue is complex. Meta’s earlier shutdown of CrowdTangle, a valuable tool for tracking social media trends, exemplifies the obstacles faced by researchers seeking to monitor platform activity. Users deserve assurance that platforms are actively filtering false and misleading election information out of paid advertising. The onus should not be on individuals to fact-check every piece of information they encounter online, especially when it is presented within seemingly legitimate advertising spaces. Organizations dedicated to media integrity must continue to rigorously evaluate the effectiveness of platform policies and hold platforms to their stated commitments.
Finally, it’s important to acknowledge that the positive results observed in the US context do not necessarily reflect the global situation. Previous investigations have revealed significant shortcomings in TikTok’s and YouTube’s content moderation during elections in other countries, including India and Ireland. Election disinformation remains a persistent threat, and a comprehensive solution requires social media platforms to dedicate adequate resources to content moderation across every jurisdiction in which they operate. Until then, the vulnerability of democratic processes to manipulation remains a significant concern.