Americans Favor Experts, But Well-Designed Layperson Juries Can Earn Trust for Online Fact-Checking
In the ongoing battle against online misinformation, a new study finds that Americans largely trust expert panels to judge the veracity of online content. Carefully structured juries of everyday citizens, however, can earn nearly the same level of trust, offering a potential alternative or complement to expert-driven fact-checking. The finding comes at a pivotal moment, as social media platforms move away from reliance on professional fact-checkers and experiment with community-based approaches to content moderation.
Published in PNAS Nexus by researchers from MIT, the University of Washington, and the University of Michigan, the study underscores the role of public perception in countering online misinformation. The researchers surveyed 3,000 Americans, presenting them with content moderation scenarios decided by different kinds of juries, from expert panels to random users to algorithms. Participants rated the legitimacy of each jury's decision, even when they personally disagreed with the outcome. This focus on perceived legitimacy in the face of disagreement reflects the researchers' interest in identifying systems that can earn trust and acceptance across diverse viewpoints.
Unsurprisingly, expert panels of specialists, fact-checkers, and journalists received the highest legitimacy ratings. But the study also found that certain structural features could significantly raise the perceived legitimacy of layperson juries: larger juries, knowledge qualifications for jurors, and opportunities for deliberation all increased trust. When all three conditions were met, layperson juries, especially representative and politically balanced ones, achieved legitimacy ratings approaching those of the expert panels.
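To make that design space concrete, the sketch below enumerates hypothetical jury configurations along the structural dimensions the article describes (panel type, size, knowledge qualification, deliberation, and political balance). It is purely illustrative: the attribute names and levels are assumptions for exposition, not the study's actual survey instrument or code.

```python
# Illustrative sketch only: enumerate hypothetical jury configurations along
# the structural dimensions discussed above. Attribute names and levels are
# assumptions, not the study's actual design.
from itertools import product

panel_types = ["domain experts", "professional fact-checkers", "laypeople"]
jury_sizes = [3, 30, 3000]                         # hypothetical size levels
qualifications = ["none", "news-knowledge quiz"]   # hypothetical screening rule
processes = ["independent votes", "group deliberation"]
balance = ["unbalanced", "politically balanced"]

configurations = [
    {"panel": p, "size": s, "qualification": q, "process": d, "balance": b}
    for p, s, q, d, b in product(
        panel_types, jury_sizes, qualifications, processes, balance
    )
]

# Each configuration could anchor one survey vignette whose decision
# respondents rate for legitimacy, whether or not they agree with the outcome.
print(f"{len(configurations)} hypothetical jury configurations")
print(configurations[0])
```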
The research also examined how political identity shapes trust in content moderation. Both Republicans and Democrats viewed experts as more trustworthy than the alternatives, though Republicans expressed less confidence in experts than Democrats did. Across party lines, however, Americans rejected decisions made by algorithms, social media CEOs, or random chance, indicating a clear preference for human judgment. This skepticism toward automated and arbitrary approaches underscores the need for platforms to prioritize transparency and human oversight in their moderation systems.
These findings offer valuable insights for social media platforms navigating the complex landscape of online content moderation. Rather than a binary choice between expert moderation and a laissez-faire approach, the research suggests that hybrid models, combining expert credibility with well-designed community input systems, may be the most effective path forward. Layperson juries, when structured appropriately, could provide a valuable complement to expert panels, offering broader representation and potentially mitigating concerns about bias or censorship.
Although the study relied on hypothetical scenarios rather than actual moderation decisions on real content, its findings carry significant implications for the fight against misinformation. They suggest that public trust does not rest on expertise alone but can also be earned by well-designed community-based systems that build in transparency, deliberation, and representativeness.
The research also highlights the importance of legitimacy itself. Even when individuals disagree with a particular decision, their trust in the overall system is crucial for its effectiveness. This emphasis on procedural fairness, rather than outcome agreement alone, points to the need for moderation processes that are perceived as fair and equitable.
Moreover, the study suggests that social media platforms may be misjudging public sentiment by increasingly relying on AI-driven moderation. While automation can play a role in identifying and flagging potentially harmful content, the public’s preference for human judgment suggests that algorithmic decisions alone are unlikely to be viewed as legitimate.
The study's limitations include its reliance on hypothetical scenarios and its restriction to the US context. Further research is needed to determine how these findings translate to real-world moderation decisions and whether similar patterns of trust emerge in other cultural contexts.
Despite these limitations, the study offers valuable insight into public perceptions of content moderation. It shows that both expertise and procedural fairness matter for building trust, and that well-designed community-based systems can play a meaningful role in countering misinformation online. For platforms wrestling with content moderation, the findings point toward more legitimate and effective ways of separating fact from fiction in the digital age.