The Decentralized Threat of Disinformation: Emerging Platforms and the Future of Elections
The integrity of democratic elections faces a growing threat from online information manipulation, amplified by the rapid advancement and proliferation of artificial intelligence. This year, as billions head to the polls in over 50 countries, the challenge isn’t confined to familiar battlegrounds like Facebook and Twitter (now X). It’s spreading to a decentralized network of smaller, emerging platforms, many of which lack the resources, or even the inclination, to combat disinformation. This fragmented digital landscape poses a significant hurdle for efforts to protect democratic processes worldwide.
While established social media platforms grapple with content moderation and the rollback of safeguards against harmful content, platforms like Telegram, Mastodon, and Bluesky present a new frontier for information manipulation. Often operating without robust moderation policies or the resources to enforce them, these platforms are vulnerable to exploitation by malicious actors seeking to undermine elections and erode public trust. The shift toward a decentralized online space, though it offers greater platform diversity, makes tracking and preventing election misinformation significantly more complex. The absence of consistent oversight across these platforms creates fertile ground for the next generation of manipulation tactics, posing a critical threat to democratic values.
Evidence of this decentralized threat is mounting. In Kenya’s 2022 elections, TikTok was identified as a conduit for rapidly spreading political disinformation that exploited fears of post-election violence. Platforms with concentrated user bases in specific countries, like Kakao in South Korea or Line in Japan, operate with varying and often inadequate content moderation policies, further complicating the challenge. The evolving landscape of social media, with platforms like Discord, Clubhouse, and Twitch gaining prominence, diffuses information across a vast, interconnected network, making comprehensive monitoring increasingly difficult.
The structural changes at X have accelerated the rise of alternatives like Mastodon and Bluesky, while platforms like Gab cater to specific ideological niches. The fluidity of this online environment, with users migrating between platforms and online communities, necessitates a multi-faceted approach to combating information manipulation. Cases like Iran’s 2021 election, where Clubhouse was used for political debates despite privacy concerns, and Brazil’s 2022 election, where Telegram became a hotbed of disinformation, highlight the urgent need for proactive measures to safeguard electoral processes. These examples underscore the importance of preparedness, coordination, and robust response mechanisms within social media companies for addressing influence operations and malign actors.
Combating this evolving threat requires a collective effort from civil society, academics, investors, and the platforms themselves. Open-source governance models, drawing inspiration from platforms like Wikipedia and Reddit, can help smaller platforms build sustainable trust and safety systems. Open-source impact assessments, like those developed by Indiana University’s Observatory on Social Media, can track and report on disinformation spreaders, providing valuable data for advocacy and policy change. Civil society organizations can play a crucial role in conducting risk assessments and implementing crisis protocols, drawing on models like the Meta Oversight Board and the Christchurch Call to Action to provide multi-stakeholder oversight.
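To make the tracking idea concrete, the following is a minimal sketch, not the Observatory's actual tooling: it assumes a hypothetical feed of posts (author plus shared URLs) and a research-supplied list of low-credibility domains, and it ranks accounts by how often they share links to those domains.

```python
from collections import Counter
from urllib.parse import urlparse

# Assumption for this sketch: a curated list of low-credibility domains,
# such as those published by misinformation researchers. Domains are invented.
LOW_CREDIBILITY_DOMAINS = {"example-fakenews.com", "hoax-site.example"}

def domain_of(url: str) -> str:
    """Extract the host from a URL, lowercased, without a leading 'www.'."""
    return urlparse(url).netloc.lower().removeprefix("www.")

def rank_spreaders(posts):
    """Count how often each account links to flagged domains.

    `posts` is an iterable of dicts shaped like
    {"author": "@handle", "urls": ["https://..."]} (a hypothetical schema).
    Returns (account, count) pairs in descending order of flagged shares.
    """
    counts = Counter()
    for post in posts:
        flagged = sum(1 for u in post.get("urls", [])
                      if domain_of(u) in LOW_CREDIBILITY_DOMAINS)
        if flagged:
            counts[post["author"]] += flagged
    return counts.most_common()

# Toy usage: @acct1 shares two flagged links, @acct2 shares none.
sample = [
    {"author": "@acct1", "urls": ["https://example-fakenews.com/story"]},
    {"author": "@acct2", "urls": ["https://news.example.org/report"]},
    {"author": "@acct1", "urls": ["https://www.hoax-site.example/claim"]},
]
print(rank_spreaders(sample))  # [('@acct1', 2)]
```

Even a simple tally like this, run across platforms, yields the kind of data that researchers and advocates can cite when pressing platforms or policymakers to act.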
Principled investment can incentivize emerging platforms to prioritize democratic values from the outset. Investors and donors can require startups to commit to transparency, robust content moderation, and user privacy as preconditions for funding. Recent actions by Bluesky and OpenAI demonstrate the potential influence of investors in shaping platform policies related to hate speech and human rights. Furthermore, leveraging "democracy bots" programmed to flag inauthentic accounts and deepfakes can offer a proactive defense against manipulation tactics. Platforms can further incentivize this development by recognizing and rewarding developers who incorporate democratic principles into their bot designs.
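To illustrate what a "democracy bot" might check, here is a minimal heuristic sketch. The signals (account age, posting rate, follower ratio) and the thresholds and weights are illustrative assumptions, not a production detection method; real systems combine far richer content, network, and timing features with human review.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    age_days: int         # days since the account was created
    posts_per_day: float  # average posting rate
    followers: int
    following: int

def inauthenticity_score(acct: Account) -> float:
    """Combine a few illustrative signals into a 0..1 suspicion score.

    Weights and cutoffs are assumptions made for this sketch only.
    """
    score = 0.0
    if acct.age_days < 30:            # very new account
        score += 0.3
    if acct.posts_per_day > 100:      # superhuman posting rate
        score += 0.4
    if acct.following > 500 and acct.followers < acct.following / 10:
        score += 0.3                  # mass-follows with little reciprocity
    return min(score, 1.0)

def flag_for_review(accounts, threshold=0.6):
    """Return handles whose score crosses the review threshold."""
    return [a.handle for a in accounts if inauthenticity_score(a) >= threshold]

# Toy usage: the brand-new, hyperactive account is flagged; the veteran is not.
suspicious = flag_for_review([
    Account("@new_burst", age_days=5, posts_per_day=240,
            followers=12, following=4000),
    Account("@longtime_user", age_days=2200, posts_per_day=3.5,
            followers=800, following=400),
])
print(suspicious)  # ['@new_burst']
```

Note the design choice: the bot flags accounts for human review rather than removing them automatically, which is exactly the kind of restraint a democratic-principles standard for bot developers would reward.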
Encouraging participation in existing multi-stakeholder initiatives like the Global Network Initiative, which promotes principles of transparency, privacy, and freedom of expression, can help establish norms and accountability. App stores, like Apple’s and Google’s, wield significant power by setting content governance requirements for the apps they host. While these approaches have limitations, they offer valuable tools for influencing platform behavior and promoting democratic principles. Ultimately, fostering a democratic online ecosystem requires a diverse and coordinated approach, one that recognizes the unique challenges posed by the ever-evolving digital landscape. Addressing these challenges proactively will be crucial to safeguarding the integrity of elections and protecting the future of democracy worldwide. Equally important is supporting the development and growth of alternative platforms that adhere to democratic principles, as a counterweight to authoritarian-controlled platforms that actively undermine those values.