X’s Community Notes: A Failing Grade in South Asian Languages
X (formerly Twitter) touts Community Notes as its flagship crowdsourced fact-checking initiative, promising real-time context for potentially misleading posts. However, a comprehensive study by the Center for the Study of Organized Hate (CSOH) reveals significant shortcomings in the system, particularly its efficacy in South Asian languages. Although speakers of Hindi, Urdu, Bengali, Tamil, and Nepali account for roughly a quarter of the global population and about 5% of X’s user base, notes written in those languages make up a negligible 0.094% of the 1.85 million archived notes the study examined. More alarming still, only 37 of those notes ever reached public visibility. This disparity exposes a critical vulnerability in the platform’s fact-checking mechanism, leaving a vast segment of users disproportionately exposed to misinformation.
The mechanics of Community Notes rely on a two-step process: a note must first accumulate “helpful” ratings, then pass a “bridging test” that requires agreement from raters who typically hold differing viewpoints. In theory, this framework transcends echo chambers and establishes consensus on factual accuracy. In practice, the system falters when a language lacks enough active contributors to reach the consensus thresholds. The CSOH study finds that South Asian language notes receive comparable or even higher “helpfulness” ratings than English notes, yet fewer than 40% pass the bridging test, well below the 65% success rate observed for other languages. The bottleneck stems directly from the scarcity of reviewers who are both fluent in these languages and drawn from diverse perspectives, leaving accurate notes languishing in draft form while misinformation proliferates, as the sketch below illustrates.
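To make the failure mode concrete, here is a minimal sketch of a bridging-style publication check. This is not X’s actual algorithm, which scores notes with matrix factorization over the full rating matrix; the viewpoint clusters, minimum-rating counts, and thresholds below are hypothetical, chosen only to show how a sparse rater pool stalls a note before scoring ever begins.

```python
# Simplified illustration of a bridging-style publication check.
# All parameters and the two-cluster model are hypothetical stand-ins,
# not the platform's real scoring logic.

from collections import Counter

MIN_RATINGS = 5       # assumed minimum ratings before a note is scored
MIN_PER_CLUSTER = 2   # assumed "helpful" votes required from each side

def bridging_check(ratings):
    """ratings: list of (viewpoint_cluster, rated_helpful) tuples."""
    if len(ratings) < MIN_RATINGS:
        return "NEEDS_MORE_RATINGS"   # note stays in draft
    helpful_by_cluster = Counter(
        cluster for cluster, helpful in ratings if helpful
    )
    # Publication requires cross-viewpoint agreement, not raw popularity.
    if all(helpful_by_cluster.get(c, 0) >= MIN_PER_CLUSTER
           for c in ("left", "right")):
        return "PUBLISHED"
    return "NOT_HELPFUL_ENOUGH"

# An English note with a deep, diverse rater pool clears both bars:
print(bridging_check([("left", True)] * 3 + [("right", True)] * 3))
# -> PUBLISHED

# A Hindi note with the same unanimous support but only three raters
# never reaches the scoring stage at all:
print(bridging_check([("left", True), ("left", True), ("right", True)]))
# -> NEEDS_MORE_RATINGS
```

The point of the toy model is that both notes carry unanimous support; only the depth of the rater pool differs, which is exactly the disparity the report documents for South Asian languages.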
The consequences of this disparity are particularly concerning in South Asia, a region identified by organizations such as the World Economic Forum as highly susceptible to misinformation. The dearth of published Community Notes in South Asian languages compounds existing linguistic inequalities and leaves these communities with inadequate protection against misleading content. The report’s analysis of Community Notes activity in South Asian languages from April 2024 to April 2025 reveals a telling pattern: note-writing remained largely stagnant until the Indian general election period (April-June 2024), when weekly note volume surged twentyfold. This spike underscores two key issues: the system can be mobilized during crises, but it cannot scale its review capacity to match, producing a backlog of draft notes precisely when timely context is most crucial for informed decision-making.
Beyond scarcity lies a more insidious problem: the potential for weaponizing Community Notes. While many draft notes simply fail to publish, some reveal troubling intentions, including the use of ethnic slurs and derogatory language targeting specific political groups. Such submissions expose not only a misunderstanding of the feature’s purpose but also deliberate attempts to exploit it for malicious ends. Because Community Notes has no dedicated moderation layer and relies solely on crowdsourced ratings, which are especially sparse in these vulnerable linguistic contexts, a dangerous loophole opens: hateful rhetoric and partisan attacks can potentially gain traction under an algorithm built to reward diverse perspectives, ironically amplifying the very type of content Community Notes was designed to combat.
To address these critical shortcomings, the CSOH report proposes actionable steps for X and for platforms like Meta that are piloting similar systems. Recognizing the shared design vulnerabilities (reviewer scarcity, lack of civility screening, and inadequate response during peak activity), the report recommends three key measures: year-round, proactive recruitment of multilingual reviewers; publication thresholds adjusted to reflect linguistic realities; and automated civility filters applied at the point of submission, along the lines of the sketch below. By learning from X’s missteps, Meta can avoid replicating these failures, protecting vulnerable users and preventing the perpetuation of existing inequities.
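To make the third recommendation concrete, here is a minimal sketch of a submission-time civility pre-screen. The lexicons, the routing labels, and the keyword-matching approach are all hypothetical; a production system would maintain vetted per-language term lists and back them with a trained multilingual toxicity classifier rather than bare keyword matching.

```python
import re

# Hypothetical per-language lexicons of slurs and derogatory terms.
# The placeholder entries stand in for vetted, language-specific lists.
BLOCKLISTS = {
    "hi": ["<hindi-slur-1>", "<hindi-slur-2>"],
    "bn": ["<bengali-slur-1>"],
    "en": ["<english-slur-1>"],
}

def civility_prescreen(note_text: str, language: str) -> str:
    """Route a newly submitted note before it enters the rating pool."""
    terms = BLOCKLISTS.get(language, [])
    pattern = "|".join(re.escape(t) for t in terms)
    if pattern and re.search(pattern, note_text, flags=re.IGNORECASE):
        # Flagged notes go to human review instead of sitting in a
        # thinly rated queue where crowd moderation never happens.
        return "HOLD_FOR_REVIEW"
    return "ENTER_RATING_POOL"
```

A gate like this is deliberately conservative: it never decides publication, only whether a note enters the crowd-rating pipeline at all, which addresses the moderation gap the report identifies without adding a new censorship surface.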
The study underscores the urgency for X to prioritize these fundamentals before expanding the system further, for instance through its recent pilot of AI-generated notes. Rather than automating a feature that already struggles in English and demonstrably fails in South Asian languages, the platform should expand and diversify the reviewer pool, adapt the publication algorithm to the realities of smaller language communities, and establish basic civility filters to stem harmful content. Ultimately, platforms must shift from simply counting notes to ensuring equitable coverage: a system’s true measure of success is its capacity to protect users from misinformation regardless of the language they speak. Building trust and safety across the global digital landscape requires that commitment to linguistic equity.