TikTok’s Climate Misinformation Problem: A Deep Dive into COP29 Content Moderation
The annual Conference of the Parties (COP), a critical summit addressing the global climate crisis, serves as a focal point for disseminating accurate climate information. Social media platforms play a significant role in this information landscape, and TikTok, with its explicit ban on climate misinformation, stands out. However, an investigation into comments on COP29-related videos reveals a troubling disconnect between policy and practice. Despite TikTok’s stated commitment to tackling climate misinformation, climate denial thrives in plain sight, raising serious concerns about the platform’s content moderation effectiveness.
The investigation focused on comments posted on COP29 videos shared by major news organizations on TikTok. Twenty comments explicitly denying climate change or its human-caused origins were flagged through TikTok’s in-app reporting system. These comments, found on videos with a collective viewership exceeding 3 million, directly contradicted TikTok’s community guidelines, which prohibit misinformation that undermines the established scientific consensus on climate change. Strikingly, only one of the twenty reported comments was initially removed for violating these guidelines. The majority of reports received a dismissive response stating that no violation had occurred, exposing a significant failure in TikTok’s enforcement mechanisms.
This failure is particularly alarming considering TikTok’s growing role as a news source. The presence of climate denial comments on prominent news channels’ videos, often appearing in top search results for COP29, exposes a vast audience to misleading information. Research indicates that a significant portion of TikTok users actively engage with comments, further amplifying the reach and potential impact of this misinformation. The timing of these findings, coinciding with a highly publicized event like COP29, where individuals actively seek climate information online, exacerbates the risks.
While TikTok deserves credit for acknowledging the threat of climate misinformation and implementing reporting mechanisms, the investigation’s findings expose a critical gap between policy and enforcement. Despite symbolic gestures, such as the "#COP29 For Climate Action" banner linking to the UNFCCC’s official page, the platform’s moderation system appears ill-equipped to handle the influx of climate denial. This inadequacy is underscored by reports of impending layoffs within TikTok’s content moderation teams, including those in the UK, raising concerns about the prioritization of content moderation and the potential for further erosion of safeguards against misinformation.
Insider perspectives shed light on potential reasons behind these shortcomings. A current TikTok content moderator, speaking anonymously, attributed the rise in misinformation and hate speech to an increasing reliance on automated and outsourced moderation. This shift away from experienced human moderators suggests a trade-off between efficiency and accuracy, potentially jeopardizing the platform’s ability to combat harmful content effectively. John Chadfield, a national officer for the Communication Workers Union, echoed these concerns, emphasizing the importance of human oversight in content moderation. He argued that automated systems should complement, not replace, human moderators, and called for increased investment in human resources to meet the growing challenges of online content moderation.
The investigation’s findings prompted a call for action directed at TikTok. The platform was urged to investigate the failures in its content moderation system, specifically regarding the inaction on reported climate denial. Furthermore, TikTok was urged to prioritize information integrity by adequately resourcing its content moderation efforts globally, ensuring fair wages for moderators, supporting unionization, and providing essential psychological support. Robust enforcement of mis- and disinformation policies, for both organic and paid content, was also emphasized as crucial for safeguarding against the harmful effects of misinformation.
In response to the investigation, TikTok reiterated its commitment to combating climate misinformation through its Community Guidelines. The platform highlighted its use of both human and automated moderation, along with substantial investment in trust and safety measures, and claimed a high proactive detection rate for policy-violating content, asserting that the vast majority of such videos are removed before users report them.

TikTok initially claimed to have removed all twenty reported comments. Subsequent checks, however, revealed that one comment remained. Further engagement with the platform led to the removal of that comment and of additional violative comments on other videos, along with a promise of continued monitoring.

TikTok also pointed to partnerships with creators and public figures such as Neil deGrasse Tyson to promote climate conversations. Ironically, even Tyson’s COP29 video, endorsed by TikTok For Good, contained unremoved climate denial comments, further underscoring the inconsistency and ineffectiveness of current moderation practices. This ongoing battle against misinformation highlights the urgent need for greater transparency, accountability, and robust content moderation strategies to safeguard the integrity of information on platforms like TikTok.