Executive Summary
The 2024 Social Media Safety Index (SMSI) report, focusing on LGBTQ safety, privacy, and expression online, reveals a persistent failure of major social media platforms to adequately address hate, harassment, and disinformation. Despite having policies prohibiting such content, TikTok, X (formerly Twitter), YouTube, Instagram, Facebook, and Threads all receive failing grades, with TikTok scoring only marginally better at a D+. This report underscores the devastating real-world consequences of online hate speech, linking it to increased hate crimes and politically motivated violence, especially targeting marginalized groups such as the LGBTQ community. The report also details how anti-LGBTQ narratives are both a calculated political strategy and a profitable enterprise for right-wing figures and the tech companies hosting their content. The SMSI calls for urgent regulatory oversight to address the perverse incentives corrupting our information environment, a problem compounded by the platforms’ downsizing of content moderation teams.
Key Conclusions
The SMSI identifies several critical issues impacting LGBTQ safety online: inadequate content moderation, flawed policy enforcement, harmful and opaque algorithms, insufficient data privacy controls, and a pervasive lack of industry transparency and accountability. These issues disproportionately affect marginalized communities, particularly those at the intersection of multiple identities. The report highlights the dual problems of under-moderation of hate speech and over-moderation of legitimate LGBTQ expression. While anti-LGBTQ content proliferates, legitimate LGBTQ accounts and content are often wrongfully removed, demonetized, or shadowbanned. The confluence of these factors creates a hostile online environment for LGBTQ individuals, undermining their safety, privacy, and freedom of expression.
The report also points to the dangerous convergence of online hate and offline harm, emphasizing how hate speech online fuels real-world violence. The spread of disinformation and conspiracy theories online contributes to this problem, creating a toxic ecosystem that normalizes and encourages hate-motivated actions. Moreover, the deliberate targeting of LGBTQ people with fear-mongering and bigotry represents a cynical political strategy aimed at consolidating power and generating profit for certain individuals and groups. This dynamic necessitates immediate intervention to protect vulnerable communities from the escalating dangers of online hate.
Recommendations
The SMSI report strongly advocates for regulatory oversight of the social media industry to address the systemic failures identified. The report calls for increased transparency and accountability from platforms, requiring them to disclose their content moderation practices and algorithms. Improved enforcement of existing hate speech policies is crucial, along with the development of more effective and nuanced policies that address the specific needs of marginalized communities. The report urges platforms to invest in robust content moderation resources, reversing the trend of downsizing moderation teams, and to prioritize public safety over profits. Furthermore, the report recommends implementing stronger data privacy controls, giving users more agency over their information, and strengthening measures to protect against online harassment and doxing.
Finally, the SMSI calls for collaborative efforts between platforms, policymakers, researchers, and advocacy groups to develop comprehensive solutions to address the complex challenges of online hate and disinformation. This includes promoting media literacy to help users critically evaluate online content and resist manipulation. Ultimately, the report emphasizes the need for a multi-pronged approach that combines regulatory action, industry responsibility, and public awareness to create a safer and more inclusive online environment for all.
Methodology
The 2024 SMSI Platform Scorecard evaluates six major social media platforms: TikTok, X, YouTube, Instagram, Facebook, and Threads. The scorecard assesses these platforms based on a range of criteria relating to LGBTQ safety, privacy, and expression, including policy comprehensiveness, content moderation practices, algorithmic transparency, data privacy controls, and platform accountability. While the scorecard acknowledges the widespread failure of platforms to enforce their hate speech policies, it does not quantitatively assess enforcement due to methodological challenges and lack of transparency from the companies. This limitation highlights the need for greater transparency and data sharing from platforms to facilitate more comprehensive evaluation and accountability.
The SMSI report draws on a variety of sources, including:
- Analysis of platform policies and practices
- Monitoring of online hate speech and disinformation
- Research on the impact of online hate on marginalized communities
- Reports from other organizations working on online safety and human rights
- News articles and other media coverage
The report also incorporates qualitative data from LGBTQ users’ experiences on social media, providing valuable insight into the real-world impact of platform policies and practices. By combining quantitative and qualitative data, the SMSI offers a comprehensive assessment of the current state of LGBTQ safety online and provides actionable recommendations for improvement. The consistently failing grades across platforms underscore the urgency of addressing these issues to protect LGBTQ individuals and promote a more equitable and inclusive digital environment.
The Economy of Hate and Disinformation
The SMSI report sheds light on the disturbing reality that hate and disinformation targeting marginalized groups, including the LGBTQ community, are not only harmful but also profitable. Right-wing figures and groups who generate hateful content often benefit financially from increased engagement and visibility, while social media companies profit from the advertising revenue generated by this activity. This perverse incentive structure creates a vicious cycle where hate is amplified and monetized, further endangering vulnerable communities. This “economy of hate” underscores the need for regulatory intervention to disrupt the financial incentives that drive the spread of harmful content online. Furthermore, platforms must prioritize public safety over profit and take responsibility for addressing the harmful consequences of their business models.
Suppression of LGBTQ Content and Over-Moderation
The SMSI report identifies a troubling paradox: platforms fail to moderate anti-LGBTQ hate speech while simultaneously over-moderating legitimate LGBTQ content. This phenomenon reflects a systemic bias that disproportionately impacts LGBTQ users. While hateful content often remains unchecked, legitimate LGBTQ accounts and posts are frequently flagged, removed, demonetized, or shadowbanned. This suppression of LGBTQ voices contributes to a hostile online environment and limits the ability of LGBTQ individuals to connect, organize, and express themselves freely. Platforms must address this double standard by investing in more nuanced and culturally competent content moderation systems that can distinguish between hate speech and legitimate expression. They must also ensure greater transparency and accountability in their moderation practices to prevent further marginalization of LGBTQ communities.