TikTok Embraces Crowdsourced Fact-Checking with ‘Footnotes’ Feature
TikTok has launched a new initiative to combat misinformation on its platform, introducing a crowdsourced fact-checking system called “Footnotes.” The feature, similar to X’s Community Notes and Meta’s equivalent system, allows vetted users to contribute contextual information to videos that may contain misleading or inaccurate content. The move reflects a growing trend among social media giants to leverage the collective intelligence of their user base to tackle the pervasive problem of online misinformation.
Footnotes enables selected TikTok users to attach written notes to videos, providing additional context or corrections for potentially misleading information. These notes are then subjected to a community rating system in which other users assess their helpfulness. If a footnote receives sufficient positive ratings from a diverse range of users, it becomes publicly visible beneath the corresponding video. TikTok emphasizes that the system will complement existing content moderation efforts, including content labeling and partnerships with professional fact-checking organizations such as AFP. The platform has opened the program to nearly 80,000 qualified US-based users who have maintained active accounts for at least six months, a small fraction of its roughly 170 million US users.
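The publication rule described above, where a note goes live only once it earns positive ratings from a diverse set of users, can be illustrated with a toy sketch. TikTok has not published its algorithm; the grouping scheme, function names, and thresholds below are purely illustrative assumptions, loosely inspired by how systems like X's Community Notes gate visibility on cross-viewpoint agreement.

```python
from collections import Counter

def note_is_public(ratings, min_helpful=5, min_groups=2):
    """Illustrative sketch, not TikTok's actual logic.

    ratings: list of (rater_group, is_helpful) tuples, where rater_group
    is any label approximating a rater's viewpoint cluster (real systems
    derive this from past rating behavior, not self-declaration).

    A note becomes public only if enough raters found it helpful AND
    that support spans multiple viewpoint groups.
    """
    helpful_by_group = Counter(g for g, h in ratings if h)
    total_helpful = sum(helpful_by_group.values())
    return total_helpful >= min_helpful and len(helpful_by_group) >= min_groups

# A note rated helpful by only one cluster stays hidden,
# even with many positive ratings:
one_sided = [("a", True)] * 10
print(note_is_public(one_sided))    # False

# Agreement across clusters publishes the note:
cross_group = [("a", True)] * 3 + [("b", True)] * 3
print(note_is_public(cross_group))  # True
```

The key design point this sketch captures is that raw vote counts are not enough: requiring agreement across dissimilar raters is what makes such systems harder to capture by a single coordinated faction.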
The adoption of crowdsourced fact-checking represents a paradigm shift in content moderation. Traditionally, platforms have relied on internal teams or partnerships with external organizations to identify and address misinformation. However, the sheer volume of content uploaded daily, coupled with the speed at which misinformation spreads, has made those methods increasingly difficult to sustain. Crowdsourcing offers a potentially scalable alternative by distributing the task of fact-checking across a vast network of users.
The efficacy of crowdsourced fact-checking, however, remains a topic of debate. While proponents argue that it harnesses the wisdom of the crowd and promotes transparency, critics raise concerns about potential biases, manipulation, and the overall effectiveness of such systems. A recent study by the Digital Democracy Institute of the Americas (DDIA) analyzed X’s Community Notes, a similar system, and found that over 90% of submitted notes never reach the public due to a lack of consensus or insufficient ratings. This raises questions about the scalability and impact of such models.
TikTok acknowledges that Footnotes may require time to gain traction and reach its full potential. The platform emphasizes that the system’s effectiveness will improve as more users participate and the algorithm learns from the accumulating data. The long-term goal is to create a self-regulating ecosystem where misinformation is quickly identified and corrected by the community itself.
This shift towards community-based moderation mirrors similar moves by other tech giants. Meta, for instance, discontinued its third-party fact-checking program in the US earlier this year, opting for its own version of community notes. The decision was met with both support and criticism: some lauded the decentralization of fact-checking, while others expressed concerns about potential biases and the lack of professional oversight. It also sparked a debate about the role of social media platforms in regulating online content and the balance between free speech and the fight against misinformation. While studies have indicated that community notes can be effective in addressing certain types of misinformation, such as vaccine-related falsehoods, researchers caution that their effectiveness hinges on broad consensus about the topic at hand. The potential for partisan motivations and targeted attacks against political opponents within these systems also remains a significant concern.
The effectiveness of Footnotes, like other crowdsourced fact-checking initiatives, will depend on several factors, including user participation, the quality of contributed notes, and the platform’s ability to mitigate potential biases and manipulation. As TikTok ventures into this new frontier of content moderation, the long-term impact of this approach on the fight against misinformation remains to be seen. The success of this model will likely influence future content moderation strategies across other platforms, shaping the landscape of online discourse and the ongoing battle against the spread of false information.