Platforms Identify Election Disinformation in Recent Study

By Press Room, December 28, 2024

Social Media Platforms Show Improvement in Blocking Election Disinformation, But Challenges Remain

In the lead-up to the 2024 US Presidential election, concerns regarding the spread of disinformation on social media platforms reached a fever pitch. A pre-election investigation conducted by independent researchers raised serious questions about the efficacy of content moderation policies on platforms like TikTok and YouTube. This investigation involved submitting eight ads containing demonstrably false election information, including claims about online voting and incitements to violence against election workers. These ads were deliberately crafted using "algospeak," substituting letters with numbers and symbols, to mimic tactics employed by malicious actors seeking to circumvent platform safeguards.
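The "algospeak" technique described above is, at its core, a simple character-substitution scheme. The short Python sketch below is purely illustrative: the substitution table and the sample phrase are assumptions chosen for demonstration, not the wording or mapping used in the researchers' ads. It shows how an obfuscated variant of a phrase can slip past a naive exact-match keyword filter.

```python
# Illustrative sketch of "algospeak"-style obfuscation: swapping letters for
# look-alike digits and symbols so that naive keyword filters no longer match.
# The substitution table is a hypothetical example, not the mapping used in
# the investigation described in this article.
SUBSTITUTIONS = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "$"}

def to_algospeak(text: str) -> str:
    """Replace selected letters with look-alike digits and symbols."""
    return "".join(SUBSTITUTIONS.get(ch.lower(), ch) for ch in text)

if __name__ == "__main__":
    phrase = "vote online"                    # hypothetical flagged phrase
    obfuscated = to_algospeak(phrase)         # -> "v0t3 0nl1n3"
    print(obfuscated)
    print("vote online" in obfuscated)        # False: an exact-match filter misses it
```

Catching such variants in practice requires normalizing look-alike characters or using fuzzy matching rather than exact keyword lists, which is part of why automated review of ad copy is harder than it may first appear.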

The initial results were concerning. TikTok approved half of the disinformation-laden ads, raising alarms about the platform’s vulnerability to manipulation. YouTube likewise approved half of the ads but, crucially, required personal identification before publication, creating a higher barrier to entry for anyone seeking to spread disinformation. Following the October investigation, TikTok acknowledged the policy violations, attributed the approvals to errors, and promised to refine its detection mechanisms.

To assess the impact of this pledge, the researchers resubmitted the identical eight ads to both platforms. The results showed a marked improvement. TikTok rejected all eight ads, including those it had previously approved, indicating a positive shift in its content moderation practices. YouTube suspended the researchers’ account under its suspicious-payments policy and flagged half of the ads for unreliable claims or US election advertising, requiring further verification; none of the ads were approved. Importantly, the researchers ensured that none of the ads went live, so no actual disinformation was spread.

While these improved results offer a glimmer of hope, it is crucial to acknowledge the limited scope of this test. The ads were identical to those previously submitted, presenting a relatively straightforward challenge for the platforms’ moderation systems. Nevertheless, the positive outcome underscores the vital role of independent scrutiny in holding social media platforms accountable. The ability of journalists, academics, and NGOs to conduct such tests is paramount, particularly in a climate of growing governmental pressure on counter-disinformation work and tightening platform restrictions on transparency tools.

The broader context surrounding this issue is complex. Meta’s earlier shutdown of CrowdTangle, a valuable tool for tracking social media trends, exemplifies the obstacles researchers face in monitoring platform activity. Users deserve assurance that platforms are actively filtering false and misleading election information out of paid advertising. The onus should not be on individuals to fact-check every piece of information they encounter online, especially when it is presented in seemingly legitimate advertising space. Organizations dedicated to media integrity must continue to rigorously evaluate the effectiveness of platform policies and hold platforms to their stated commitments.

Finally, it’s important to acknowledge that the positive results observed in the US context do not necessarily reflect the global situation. Previous investigations have revealed significant shortcomings in TikTok and YouTube’s content moderation practices in elections held in other countries, including India and Ireland. The threat of election disinformation remains a persistent challenge, and a comprehensive solution requires social media platforms to dedicate adequate resources to content moderation across all jurisdictions in which they operate. Until then, the vulnerability of democratic processes to manipulation remains a significant concern.
