US Sanctions UN Rapporteur, Sparking Online Debate and Scrutiny of X’s Community Notes
On July 9, 2025, the US government imposed sanctions on Francesca Albanese, the UN Human Rights Council's special rapporteur on the occupied Palestinian territories, accusing her of waging a "campaign of political and economic warfare" against the United States and Israel. The move, announced by US Secretary of State Marco Rubio, followed Albanese's persistent criticism of Israel's military campaign in Gaza, launched after the Hamas attacks of October 2023, as well as her condemnation of the Trump administration's efforts to suppress voices critical of Israel. The UN immediately rejected the sanctions and demanded their reversal, while the controversy ignited a heated debate on the social media platform X (formerly Twitter), where Albanese's name trended prominently.
The online discourse saw a flurry of posts defending and criticizing Albanese's work, many of them carrying "Community Notes," X's crowdsourced fact-checking tool. These notes, brief clarifications or bits of context attached beneath a post, can be proposed by any user enrolled as a contributor. X says its "bridging algorithm" mitigates bias by publishing a note only when it is rated helpful by users who have historically disagreed with one another, theoretically preventing any single group from dominating the narrative. The system's fallibility, however, was highlighted by a misleading community note claiming that Albanese "is not a lawyer." The claim is technically accurate in the narrow sense that she has not taken a bar exam, but Albanese studied law and is described as an "international lawyer" by the UN Office of the High Commissioner for Human Rights. The incident underscored that community notes, while potentially valuable in combating disinformation, can be inaccurate and present incomplete narratives.
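X has open-sourced the ranking code behind Community Notes; at its core is a matrix-factorization model in which a note is shown only if it earns a high intercept term, the "helpfulness" that remains after viewpoint-aligned agreement is explained away by a factor (dot-product) term. The Python sketch below is a minimal illustration of that idea, with invented ratings, two artificial user camps, a simplified SGD fit, and an illustrative 0.4 display cutoff; it is not X's actual implementation.

```python
# Toy sketch of the matrix-factorization idea behind "bridging" ranking.
# All data, dimensions, and the 0.4 cutoff are invented for illustration;
# X's published scorer adds regularization details and many more rules.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_notes, dim = 6, 4, 1

# ratings[u, n]: 1 = "helpful", 0 = "not helpful".
# Users 0-2 and 3-5 represent two opposing camps. Note 0 is endorsed by
# both camps, notes 1 and 2 by one camp each, note 3 by neither.
ratings = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 1, 0],
    [1, 0, 1, 0],
], dtype=float)

# Model: rating ~ mu + user_bias[u] + note_bias[n] + user_vec[u] . note_vec[n]
mu = 0.0
user_bias = np.zeros(n_users)
note_bias = np.zeros(n_notes)           # the cross-camp "helpfulness" score
user_vec = rng.normal(0, 0.1, (n_users, dim))
note_vec = rng.normal(0, 0.1, (n_notes, dim))
lam, lr = 0.03, 0.05                    # L2 penalty and learning rate

for _ in range(2000):                   # plain SGD over all ratings
    for u in range(n_users):
        for n in range(n_notes):
            pred = mu + user_bias[u] + note_bias[n] + user_vec[u] @ note_vec[n]
            err = ratings[u, n] - pred
            mu += lr * err
            user_bias[u] += lr * (err - lam * user_bias[u])
            note_bias[n] += lr * (err - lam * note_bias[n])
            gu = lr * (err * note_vec[n] - lam * user_vec[u])
            gn = lr * (err * user_vec[u] - lam * note_vec[n])
            user_vec[u] += gu
            note_vec[n] += gn

# Partisan agreement is soaked up by the dot-product term, so only the
# note endorsed across both camps keeps a high intercept (note_bias).
for n in range(n_notes):
    status = "show as helpful" if note_bias[n] > 0.4 else "do not show"
    print(f"note {n}: bridging score {note_bias[n]:+.2f} -> {status}")
```

Run on this toy data, only note 0 (endorsed across camps) clears the cutoff; the two purely partisan notes score near zero because the factor term, not the intercept, absorbs their support.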
Community notes, intended as a collaborative fact-verification system, have shown some success against misinformation. Research from Cornell University found that notes attached to inaccurate posts on X led to fewer reposts and made it more likely that the original author would delete the post. However, an NBC News analysis revealed a decline in the number of notes actually being published. And DW Fact Check identified multiple instances where the tool misled users instead of clarifying the facts. One prominent example involved a Sky News post about structural inequality in London: a community note that was factually correct nonetheless reframed the discussion by introducing unrelated crime statistics, shifting the focus from systemic issues to portraying black boys as perpetrators. Such distortions of context show how the notes themselves can skew public perception.
The 2024 US presidential election further exposed the tool's shortcomings. A Poynter Institute analysis found that only 29% of fact-checkable posts carried a "helpful" community note, and that of those notes, only 67% actually addressed the fact-checkable content. In other words, both recall and precision were poor: most misleading posts were never corrected, and a third of the notes that did appear missed the claim they should have addressed, raising concerns about the tool's effectiveness. Meanwhile, researchers at the Alexander von Humboldt Institute for Internet and Society in Germany found evidence of political bias in community notes, suggesting that contributors' personal viewpoints can influence their assessments. Experts also warn that the system is vulnerable to manipulation by "bad actors" who set up new accounts to upvote notes that push their preferred narratives.
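Reading the two Poynter percentages together makes the problem concrete. The back-of-the-envelope calculation below applies them to a hypothetical sample of 1,000 fact-checkable posts; the sample size and the precision/recall framing are this article's illustration, not Poynter's.

```python
# Back-of-the-envelope reading of the Poynter figures on a hypothetical
# sample of 1,000 fact-checkable posts (sample size invented; only the
# 29% and 67% rates come from the analysis cited above).
posts = 1000
noted = round(0.29 * posts)        # posts that carried a "helpful" note
on_target = round(0.67 * noted)    # of those, notes addressing the checkable claim

recall = on_target / posts         # share of checkable posts actually corrected
precision = on_target / noted      # share of displayed notes that hit the mark
print(f"{noted} noted, {on_target} on target")
print(f"recall ~ {recall:.0%}, precision ~ {precision:.0%}")
# -> recall ~ 19%, precision ~ 67%: roughly four out of five misleading
#    posts go unaddressed, and a third of the notes that do appear miss
#    the claim they should be checking.
```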
The speed of publication poses another challenge. Although the average time to publish a note has fallen significantly, it remains too slow to counter the rapid spread of disinformation. A Bloomberg analysis during the Israel-Hamas war that began in 2023 found that relevant community notes often took hours, or even days, to appear; in fast-moving news cycles, that lag means a false post can do most of its damage before any correction arrives. And despite a reported increase in contributing users, the declining number of published notes raises questions about X's strategy. Some experts dismiss the platform's approach as a façade, arguing that its emphasis on contributor growth masks a decline in actual fact-checking effectiveness.
As platforms such as Meta and TikTok adopt similar community-based fact-checking models, the risks multiply. These systems depend heavily on users' ability to discern factual information, a skill often in short supply in a media-saturated world. Without robust safeguards and careful moderation, the platforms risk amplifying misinformation rather than countering it. Crowdsourced fact-checking ultimately raises a critical question: are ordinary users prepared to assess complex claims accurately in a climate of rampant misinformation? Answering it will take stronger safeguards and better media literacy if these tools are to foster a more informed digital landscape rather than exacerbate existing problems.