Bluesky: A Haven from X’s Toxicity or a Breeding Ground for New Dangers?
In the wake of Donald Trump’s election and the subsequent surge of radical rhetoric on X (formerly Twitter), a new social media platform, Bluesky, has emerged as a potential refuge for those seeking more reasoned discourse. Launched in 2023, Bluesky has seen explosive growth, attracting around 10 million users, many of them left-leaning scientists, journalists, experts, and activists seeking respite from the increasingly toxic environment fostered by Elon Musk’s ownership of X, especially after his alignment with the Republican president-elect. However, Bluesky’s promise of a healthier online space is being challenged by the infiltration of familiar faces from X’s darker corners. Far-right disinformation accounts, purveyors of manipulative narratives, and fabricators of conspiracy theories have established a presence on the platform, raising concerns about its ability to maintain its intended atmosphere of rational discussion.
The arrival of these malicious actors has not gone unnoticed. Users like the infamous far-right account AuBonTouiteFrançais have openly declared their intention to "fuck shit up" and reunite with their fact-checking adversaries who fled X. They are joined by figures like Xavier Azalbert, known for his anti-vaccine rhetoric during the Covid-19 pandemic, and Pierre Sautarel, whose "Fdesouche" press review often carries xenophobic undertones. Even more concerning is the presence of Aurélien Poirson-Atlan, who posts under the pseudonym Zoé Sagan and has actively spread fabricated stories, including the false claim that Brigitte Macron is a transgender woman. These figures, along with an army of anonymous trolls, have brought their divisive rhetoric to Bluesky, targeting "soya men" (a derogatory term for left-leaning men), "media hacks," and anyone deemed a "degenerate pseudo-progressive." Their presence has injected a familiar strain of toxicity into the platform, threatening to undermine its promise of a more constructive online environment.
Bluesky’s community, however, has not passively accepted this influx of malicious actors. The platform’s design, built around user control and community moderation, enables a more proactive defense against disinformation than X, whose algorithm often amplifies sensationalist and divisive material. Users have responded by employing the moderation tools at their disposal: customizable content filters, shared blocklists, collaborative alerts on AI-generated content, and community notes all provide mechanisms for collectively identifying and suppressing misleading posts. This user-centric approach marks a significant departure from the algorithmic amplification that accelerates the spread of disinformation on other platforms.
While Bluesky’s community-driven moderation system offers significant advantages, it is not without its vulnerabilities. The platform’s reliance on trust among users can be exploited, as demonstrated by the incident involving a purported "CSAM Blocklist" that actually targeted users expressing support for the LGBT community. This incident highlights the potential for bad actors to manipulate the system and spread misinformation under the guise of community protection. Furthermore, even well-intentioned users can fall prey to misinformation, as illustrated by the prestigious journal Nature inadvertently sharing an AI-generated fake image. These incidents underscore the need for ongoing vigilance and critical evaluation, even within a community-moderated environment.
Beyond the battle against disinformation, Bluesky has also encountered challenges with illegal content, particularly the sharing of child sexual abuse material. While such content remains marginal relative to the platform’s overall activity, its presence has been seized upon by critics, notably Elon Musk and his supporters, to paint Bluesky as a haven for illicit activity. This criticism, while often exaggerated, underscores the ongoing challenges faced by any online platform in combating illegal content and maintaining a safe environment for its users. As Bluesky continues to grow and evolve, it will need to address these vulnerabilities while preserving the core values of community and user control that distinguish it from its predecessors. The platform’s future success will depend on its ability to strike a delicate balance between fostering open discussion and effectively mitigating the risks posed by misinformation and harmful content.