Social Media Platforms Fail to Fully Curb Election Disinformation Ahead of 2024 US Presidential Election

A new investigation by Global Witness has revealed alarming vulnerabilities in the ability of major social media platforms, namely YouTube, Facebook, and TikTok, to detect and remove harmful election disinformation in the lead-up to the 2024 US presidential election. It comes just two years after a similar investigation exposed comparable flaws, raising concerns about the platforms’ commitment to safeguarding electoral integrity. The investigation tested the platforms’ moderation systems by submitting ads containing various forms of election disinformation, including false voting information, voter suppression tactics, threats against election workers, claims questioning candidate eligibility, and incitement to violence. These ads were designed using "algospeak," a tactic that substitutes numbers and symbols for letters to bypass content moderation filters.

TikTok emerged as the worst performer, approving 50% of the disinformation-laden ads despite an explicit ban on all political advertising. While this represents an improvement over the 90% approval rate recorded during the 2022 midterm elections, it remains a significant concern given TikTok’s growing influence and its popularity among young voters. The platform’s detection system appeared ineffective, failing to flag ads containing clear disinformation as long as they did not mention candidates by name. TikTok attributed the approvals to errors in its machine moderation system and emphasized the multi-layered review process, including human moderation, that ads undergo before publication. The company committed to using the findings to improve future detection and reiterated its ongoing efforts to strengthen policy enforcement.

Facebook demonstrated a marked improvement compared to previous performance, rejecting seven out of eight submitted ads. However, the one approved ad falsely claimed that a driver’s license is required for voting, underscoring the continuing presence of vulnerabilities in their systems. While Facebook’s performance in the US has improved, its struggles to curb disinformation in other elections, such as in Brazil, raise concerns about the consistency of its global enforcement efforts. The lack of response from Meta, Facebook’s parent company, to requests for comment further fuels these concerns.

YouTube, while rejecting half of the submitted ads, also presented a complex picture. The platform paused the testing account and requested further verification, leaving open the question of whether the remaining ads would have been approved had verification been provided. Notably, only one rejection was explicitly based on "unreliable claims," suggesting a potential gap in identifying disinformation unless it directly concerns election processes. Like Facebook, YouTube has shown inconsistency in tackling disinformation across different countries, particularly in its failure to detect disinformation in ads related to the Indian elections, highlighting the need for more robust global enforcement. A Google spokesperson pointed to the company’s multi-layered review process and its commitment to continuous improvement in policy enforcement.

The investigation’s findings underscore the urgent need for increased content moderation capabilities and robust integrity systems across all platforms, not just in the US, but globally. Properly resourcing content moderation efforts, including fair wages and psychological support for moderators, is crucial. Platforms must also proactively assess and mitigate risks to human rights and societal harms, publish transparency reports detailing their election integrity initiatives, and allow independent audits for accountability.

Specific recommendations for Meta include strengthening ad account verification processes and urgently bolstering its content moderation systems. For TikTok, the immediate priority is enhancing its systems for identifying political content and enforcing existing rules, along with a significant upgrade to its disinformation detection capabilities. Ultimately, the responsibility rests on these platforms to prioritize the protection of democratic processes worldwide by implementing comprehensive and consistent measures against election disinformation. The upcoming US presidential election serves as a critical test for their commitment to this responsibility.
