Platform Risk Assessments Under the EU’s Digital Services Act Fall Short in Protecting Civic Discourse and Electoral Integrity
The European Union’s Digital Services Act (DSA), enacted to rein in the power of large online platforms, requires those platforms to assess the risks their services pose to civic discourse and elections. The initial reports submitted under this obligation in late 2024 have, however, been met with disappointment. Analyses find the reports largely self-serving, vague, and lacking in substantive detail, failing to address the core issues the assessments were meant to tackle. This article delves into those shortcomings, particularly as they concern the integrity of civic discourse and electoral processes.
A key aim of the DSA is to safeguard democratic civic discourse, understood as the process by which individuals inform themselves and engage in reasoned discussion about socio-political matters. This process depends on inclusivity, respect for differing viewpoints, a commitment to factual information, and meaningful citizen engagement. The DSA seeks to address risks such as online echo chambers, the spread of misinformation, and the suppression of legitimate speech. Beyond these general risks to civic discourse, elections present specific vulnerabilities, including manipulative political advertising, foreign interference, and inconsistent content moderation practices.
This analysis focuses on six major platforms: Facebook and Instagram (both operated by Meta), Google, YouTube, TikTok, and X (formerly Twitter), chosen for their significant influence on online discourse. Despite their public commitments to safeguarding democracy, these platforms have demonstrated a concerning reluctance to acknowledge their role in shaping civic discourse and potentially influencing electoral outcomes. Their risk assessments largely evade these crucial issues.
Meta’s reports for Facebook and Instagram, while voluminous, offer little substance. They frame risks primarily in terms of harmful user behavior rather than acknowledging structural issues inherent in the platforms themselves. This framing lets Meta focus on reactive measures such as content removal instead of examining how its algorithms may amplify certain viewpoints or contribute to the formation of echo chambers. That the Facebook and Instagram reports are near-identical, despite the two platforms’ different user demographics and engagement dynamics, further underscores their superficiality.
Google, while acknowledging certain risks to civic discourse, frames risk almost exclusively in terms of election-related disinformation. Its assessment fails to adequately address how its search algorithms and YouTube recommendations shape public understanding of political issues. Similarly, YouTube’s assessment concentrates on disinformation and election integrity, overlooking concerns about algorithmic amplification, echo chambers, and the spread of extremist content.
TikTok’s report acknowledges disinformation as a risk but downplays its overall impact, despite evidence of the platform’s growing political influence. The report fails to sufficiently address the role of influencers in shaping political narratives, or the persistence of political advertising, which continues to surface despite a purported ban. It also offers no in-depth exploration of the platform’s feed and recommendation systems and their potential effects on civic discourse.
X, formerly Twitter, adopts a particularly defiant stance, essentially rejecting accountability for how its design choices shape democratic engagement. It even frames content moderation itself as a risk to free speech, citing its purported mission to enable unrestricted interaction around offensive or controversial content. This stance is especially concerning in light of research showing that X’s algorithms can disproportionately amplify extremist viewpoints.
Overall, the platforms’ risk assessments fall significantly short of expectations. They focus narrowly on election-related disinformation while neglecting broader systemic risks to civic discourse, a narrow scope that allows platforms to implement superficial measures without confronting the structural problems underlying their systems.

A more comprehensive and standardized risk assessment process, with meaningful stakeholder engagement, is essential to understanding and mitigating the challenges online platforms pose to democratic processes. At a time of mounting threats to democracy, every available tool must be used to protect a healthy, robust civic discourse and to ensure the integrity of elections. The DSA, while a step in the right direction, requires more rigorous enforcement and a broader perspective from the platforms it seeks to regulate. Those platforms must acknowledge their role in shaping online discourse and take proactive steps to mitigate the risks they create, rather than merely reacting to individual instances of harmful content.