News

Combating Misinformation on Social Media through Tiered Anonymity via Individualized Verification (IDV)

By Press Room, June 19, 2025

Combating Misinformation and Deepfakes: A Call for Tiered Anonymity on Social Media

The pervasive influence of social media on modern society continues to spark debate and discussion, particularly regarding its potential harms. Governments worldwide are grappling with the challenges posed by misinformation, deepfakes, and the impact of social media on children, leading to considerations of age restrictions and privacy concerns related to age verification. Amidst this complex landscape, a new research paper from the University of Cambridge offers a potential roadmap for policymakers, proposing a tiered anonymity framework for social media platforms to address the growing threat of misinformation amplified by generative AI and deepfakes.

Authored by David Khachaturov, Roxanne Schnyder, and Robert Mullins of the Department of Computer Science and Technology and the Institute of Criminology at the University of Cambridge, the paper, currently available on arXiv, proposes a three-tiered system based on a user's "reach score," or influence. This framework aims to balance the protection of online privacy with the need for accountability and the mitigation of harmful content.

The first tier, designed for users with limited reach, would maintain full pseudonymity, preserving the privacy of everyday interactions. The second tier, targeting accounts with moderate influence, would require private identity verification, reintroducing a level of real-world accountability for users whose posts have a wider impact. The third tier focuses on accounts with significant reach, those traditionally considered sources of mass information. This tier mandates independent, machine learning-assisted fact-checking and review for each post, ensuring a higher level of scrutiny for information disseminated to large audiences.
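The tiering logic described above can be sketched in code. Note that the paper does not publish concrete reach thresholds or a scoring formula, so the cutoff values and function names below are illustrative placeholders, not the authors' specification:

```python
# Hypothetical sketch of the three-tier anonymity framework.
# The numeric thresholds are illustrative placeholders; the paper
# defines the tiers conceptually but does not fix the cutoffs.

from enum import Enum


class Tier(Enum):
    PSEUDONYMOUS = 1   # Tier 1: full pseudonymity for limited-reach users
    ID_VERIFIED = 2    # Tier 2: private identity verification required
    FACT_CHECKED = 3   # Tier 3: ML-assisted fact-checking of each post


def assign_tier(reach_score: float,
                verify_cutoff: float = 10_000,
                fact_check_cutoff: float = 1_000_000) -> Tier:
    """Map a user's reach score to an anonymity tier.

    Higher reach implies stronger identity obligations, matching the
    paper's principle that accountability should scale with influence.
    """
    if reach_score >= fact_check_cutoff:
        return Tier.FACT_CHECKED
    if reach_score >= verify_cutoff:
        return Tier.ID_VERIFIED
    return Tier.PSEUDONYMOUS
```

For example, an everyday account with a reach score of a few hundred would stay fully pseudonymous, while an account reaching millions would fall under per-post fact-checking.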

Recognizing the unlikelihood of voluntary adoption by social media platforms, the authors propose a regulatory pathway drawing upon existing legal frameworks in the U.S., EU, and UK. They argue that while online anonymity initially served as a shield for individuals, the algorithmic amplification inherent in social media has transformed individual posts into potential vectors of misinformation, with reach and influence comparable to traditional broadcast media. This shift necessitates a reevaluation of the balance between anonymity and accountability. The authors conclude that "identity obligations should scale with influence," aligning responsibility with the potential impact of online speech.

The European Union, with its Digital Services Act (DSA), provides a strong foundation for implementing a tiered identity verification system, according to the researchers. The DSA’s Know Your Business (KYB) requirement, which mandates identity verification for commercial users, is cited as a conceptual precedent for linking platform functionality to user transparency. Similarly, the UK’s Online Safety Act 2023 and its accompanying regulations offer a layered reputational infrastructure while preserving the right to anonymity, creating a framework conducive to a tiered identity regime.

The U.S., with its robust First Amendment protections, presents a more challenging regulatory landscape. However, the authors suggest the possibility of indirect, incentive-based mechanisms for accountability, drawing on bipartisan legislative proposals at the federal level. These mechanisms could encourage platforms to adopt tiered anonymity frameworks without directly mandating them.

The core principle of the proposed framework is to calibrate anonymity to communicative reach. By implementing a tiered system, the authors argue, social media platforms can reintroduce a degree of friction that has been eroded by recommender systems. This friction, in the form of increased accountability and scrutiny for high-reach accounts, could help to curb the spread of misinformation and deepfakes while still protecting the privacy of ordinary users. The full paper, titled "Governments Should Mandate Tiered Anonymity On Social-Media Platforms to Counter Deepfakes and LLM-Driven Mass Misinformation," is publicly available for further examination. This research offers valuable insight into the ongoing debate surrounding social media regulation, providing a potential pathway for policymakers to address the complex challenges of online misinformation and preserve the benefits of online communication.

The researchers believe that implementing this tiered anonymity system would introduce a much-needed recalibration of the current social media landscape. By reintroducing friction, they contend that the virality and rapid spread of misinformation, often facilitated by sophisticated AI-generated deepfakes, can be significantly mitigated. The increasing prevalence of deepfakes, capable of convincingly fabricating audio and video content, presents a serious threat to public trust and democratic processes. The proposed framework aims to combat this threat by requiring greater transparency and accountability from those with the most influence on these platforms.

While the specific implementation details would require careful consideration and collaboration between governments, platforms, and experts, the core principles of tiered anonymity based on reach provide a solid starting point for addressing the challenges of online misinformation. This layered approach allows for a nuanced response that balances the need for free expression with the imperative to protect against harmful content. For users with limited reach, the right to anonymity remains largely intact, safeguarding their privacy in everyday online interactions. As reach increases, so does the level of accountability, culminating in rigorous fact-checking for those with the potential to disseminate information to vast audiences.

The framework proposed by the Cambridge researchers also acknowledges the varied legal and regulatory contexts across different jurisdictions. While the EU and UK appear to offer more readily adaptable legal frameworks for implementing tiered anonymity, the U.S. presents unique challenges due to its strong emphasis on free speech. However, the authors suggest that even within the U.S. context, indirect mechanisms, such as incentivizing platform adoption of verification and accountability measures, could be explored. This adaptability makes the framework potentially relevant and applicable across different legal and political systems.

It remains to be seen how this framework will be received by social media companies. The introduction of tiered anonymity with accompanying verification and fact-checking requirements would likely represent a significant change in their current operational models. There may be concerns about cost, logistical challenges, and potential pushback from users resistant to increased scrutiny. However, the researchers argue that the potential benefits in terms of combating misinformation and restoring public trust outweigh these challenges.

Ultimately, the success of this proposed framework depends on collaborative efforts between governments, platforms, and civil society. Open dialogue and careful consideration of the practical implications of its implementation are crucial. While the challenges posed by misinformation and deepfakes are complex and ever-evolving, the research from the University of Cambridge offers a promising starting point for building a more responsible and trustworthy online environment. This tiered anonymity framework could be a significant step towards addressing the escalating threats posed by misinformation while preserving the fundamental rights of online users.
