TikTok Bolsters Efforts to Combat Misinformation and Enhance Platform Trust
In an era where misinformation fuels social discord and polarization, TikTok, the popular global social media platform, is intensifying its efforts to address this critical issue. The company is implementing a multi-pronged approach, combining institutional and technological strategies to combat the spread of false information, ranging from political conspiracy theories to content generated by artificial intelligence. This commitment to platform integrity was underscored during a recent visit to TikTok’s Transparency and Accountability Centre (TAC) in Singapore, the sole facility of its kind in the Asia-Pacific region. The TAC serves as a hub for transparently sharing information about content moderation practices, recommendation algorithms, and platform security measures. TikTok officials emphasized the company’s dedication to preventing the dissemination of misinformation and fostering user trust.
TikTok maintains a firm stance against misinformation that poses significant harm to individuals or society. The platform adheres to a comprehensive set of community guidelines that define and prohibit various forms of harmful content. This includes misinformation that jeopardizes public safety, health-related falsehoods that could endanger lives, and conspiracy theories that promote violence, hatred, or target individuals. Content moderation decisions are based on these guidelines, ensuring consistency and clarity in enforcement.
Political neutrality is a key priority for TikTok. The platform actively works to prevent the spread of biased or misleading information designed to support specific political candidates; such content is deemed a violation of the platform’s misinformation policy. Attacks against candidates, including personal attacks, bullying, and harassment, are likewise prohibited under the "bullying and harassment" policy. This commitment to neutrality helps ensure a fair and balanced online environment during elections and other politically sensitive periods.
To maintain relevance and address emerging challenges, TikTok’s community guidelines are regularly updated. This dynamic approach allows the platform to quickly respond to evolving trends in misinformation and online manipulation. For instance, in light of major elections, such as the US presidential election, the guidelines were refined to address election-related misinformation and manipulation attempts. This proactive approach ensures that the platform’s policies remain current and effective in combating emerging threats.
TikTok leverages external expertise to inform its content moderation policies. The platform consults with ten external advisory committees, including the Northeast Asia Safety Advisory Committee, to gather insights and recommendations. It also collaborates with more than 20 fact-checking organizations worldwide, covering over 60 languages. This global network of fact-checkers assists in evaluating complex cases where internal judgment may be insufficient, ensuring a nuanced and informed approach to content moderation.
Recognizing the growing concern surrounding generative AI, TikTok has proactively established guidelines for AI-generated content. The platform prohibits content that impersonates authoritative sources, such as fabricated news reports, or that depicts fabricated likenesses of public figures in crisis situations or other misleading contexts. To enhance transparency and prevent the spread of deceptive synthetic media, TikTok automatically labels AI-generated content with watermarks. This labeling applies to content created on TikTok as well as content imported from other platforms, thanks to a partnership with the Coalition for Content Provenance and Authenticity (C2PA). This proactive approach helps users distinguish between authentic and AI-generated content.
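To make the C2PA mechanism concrete, the sketch below shows how a platform might inspect a parsed Content Credentials manifest to decide whether to apply an "AI-generated" label. This is illustrative only: TikTok's actual integration is not public, the dictionary shape here is a simplified assumption, and real C2PA manifests are cryptographically signed structures read with a C2PA SDK. The `digitalSourceType` value for trained algorithmic media, however, is the real IPTC vocabulary term that C2PA uses to mark generative-AI output.

```python
# Illustrative sketch only: the manifest dict layout and the helper name
# are assumptions, not TikTok's or the C2PA SDK's actual API.

def should_label_as_ai(manifest: dict) -> bool:
    """Return True if a (hypothetical, pre-parsed) C2PA manifest declares
    the content as the output of a generative model."""
    actions = manifest.get("assertions", {}).get("c2pa.actions", [])
    # IPTC's "trainedAlgorithmicMedia" digital source type denotes content
    # produced by a trained generative model.
    return any(
        a.get("digitalSourceType", "").endswith("trainedAlgorithmicMedia")
        for a in actions
    )

# Synthetic manifest for an AI-generated clip
manifest = {
    "assertions": {
        "c2pa.actions": [
            {"action": "c2pa.created",
             "digitalSourceType": "http://cv.iptc.org/newscodes/"
                                  "digitalsourcetype/trainedAlgorithmicMedia"}
        ]
    }
}
print(should_label_as_ai(manifest))  # True
print(should_label_as_ai({}))        # False: no credentials, no auto-label
```

In practice, the absence of credentials does not prove content is authentic, which is why platforms pair metadata checks with detection models and creator self-disclosure.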
TikTok employs machine learning technology to proactively identify and remove harmful content, including violent or graphic material. This automated content moderation system analyzes various signals, such as detected objects and human postures, to assess potential risks. For example, the system might flag a video showing a steak knife held point-up as potentially harmful, while the same knife pointed downward in a food preparation context would not raise concern. This nuanced approach allows the system to identify potentially harmful content before it reaches a wider audience.
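The knife example above can be sketched as a toy risk scorer. TikTok's real system uses trained models rather than hand-written rules, so everything below, including the label names, weights, and threshold, is an illustrative assumption about how object and posture signals might combine with scene context to change a flagging decision.

```python
# Toy rule-based scorer: illustrates context-dependent risk assessment,
# not TikTok's actual (ML-based) implementation.

RISKY_OBJECTS = {"knife", "firearm"}  # assumed detector label set

def risk_score(detected_object: str, posture: str, scene: str) -> float:
    """Combine detections into a 0..1 risk score (illustrative weights)."""
    score = 0.0
    if detected_object in RISKY_OBJECTS:
        score += 0.4
    if posture == "raised":       # e.g. blade held point-up
        score += 0.4
    if scene == "kitchen":        # food-preparation context lowers risk
        score -= 0.5
    return max(0.0, min(1.0, score))

FLAG_THRESHOLD = 0.6

# Knife raised in an unknown scene -> flagged for human review
print(risk_score("knife", "raised", "unknown") >= FLAG_THRESHOLD)    # True
# Same knife pointed down while chopping in a kitchen -> not flagged
print(risk_score("knife", "downward", "kitchen") >= FLAG_THRESHOLD)  # False
```

The design point is that no single signal decides the outcome: the same object yields different scores depending on posture and scene, which mirrors the article's point about context-sensitive moderation.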
Human review plays a crucial role in TikTok’s content moderation process. Following the initial machine learning analysis, professional moderators review flagged content against the community guidelines. These moderators also consider contextual factors, such as local cultural norms, customs, and laws, to ensure culturally sensitive content moderation. With a global team of tens of thousands of moderators across 80 offices operating 24/7, TikTok maintains a robust and responsive content moderation system. Further review and final decisions are made in consultation with external experts and non-governmental organizations (NGOs) for added oversight and expertise.
This multi-layered approach to content moderation has proven highly effective. According to TikTok, approximately 153 million videos were removed in the fourth quarter of last year alone. Impressively, 98.5% of these removals were proactive, occurring before any user reports or external complaints were received. Furthermore, 90.8% of the removed videos were deleted within 24 hours of being uploaded, and 83.2% were removed before receiving any views. These statistics demonstrate TikTok’s commitment to swiftly and effectively removing harmful content from its platform.
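Working through the reported percentages shows the approximate absolute numbers behind them. The figures below come directly from the article; only the rounding is ours.

```python
# Converting the article's Q4 percentages into approximate video counts.

total_removed = 153_000_000           # ~153 million videos removed

proactive  = total_removed * 0.985    # removed before any user report
within_24h = total_removed * 0.908    # removed within 24 hours of upload
zero_views = total_removed * 0.832    # removed before receiving any views

print(f"Proactive removals: ~{proactive / 1e6:.1f} million")   # ~150.7 million
print(f"Within 24 hours:    ~{within_24h / 1e6:.1f} million")  # ~138.9 million
print(f"Before any views:   ~{zero_views / 1e6:.1f} million")  # ~127.3 million
```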
TikTok’s comprehensive approach to combating misinformation and harmful content demonstrates the platform’s dedication to fostering a safe and trustworthy online environment. By combining technological advancements, robust community guidelines, expert consultation, and a proactive moderation strategy, TikTok is actively working to mitigate the spread of misinformation and ensure the platform remains a positive and engaging space for its users. The company’s ongoing efforts to refine its policies and technologies reflect its commitment to addressing the evolving challenges of online content moderation.