The Outrage Engine: How Misinformation Thrives on Social Media
A groundbreaking study published in Science reveals a disturbing truth about the spread of misinformation online: it's not just a question of accuracy, but of outrage. The research, conducted by Killian L. McLoughlin, William J. Brady, Aden Goolsbee, Ben Kaiser, Kate Klonick, and M.J. Crockett, demonstrates that misinformation thrives on social media not because people are unaware it is false, but because it effectively exploits moral outrage, a potent blend of anger and disgust triggered by perceived moral transgressions. This finding challenges the core assumption behind many fact-checking initiatives, which presume that users prioritize truth and accuracy when sharing information. The study suggests a more complex reality: the emotional appeal of outrage often trumps the desire for accuracy, fueling the spread of misinformation even when users are aware of its dubious nature.
The researchers investigated the interplay between misinformation and outrage across three key questions: Does misinformation provoke more outrage than trustworthy news? Does outrage increase the spread of misinformation? And does outrage influence the psychological motivations behind sharing misinformation? To answer these questions, the team analyzed vast datasets from Facebook and Twitter, encompassing data from 2017 and 2020-2021. They also conducted two controlled behavioral experiments using fact-checked headlines to gauge participants’ sharing tendencies. The Facebook and Twitter data focused on user engagement with posts containing web links, classifying sources as either misinformation or trustworthy based on their reputation. This method was chosen over individual article fact-checking due to scalability and reduced selection bias. In the behavioral experiments, American participants evaluated headlines based on trustworthiness and outrage levels, then indicated their likelihood of sharing.
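To make the source-level approach concrete, a minimal sketch of domain-based classification might look like the following. The domain lists here are invented placeholders; the actual study relied on independently curated ratings of news sources rather than these example entries.

```python
from urllib.parse import urlparse

# Hypothetical source lists -- illustrative only. The study classified
# sources by reputation using curated ratings, not these example domains.
MISINFORMATION_DOMAINS = {"fakenews.example", "clickbait.example"}
TRUSTWORTHY_DOMAINS = {"reuters.com", "apnews.com"}

def classify_link(url: str) -> str:
    """Label a shared link by its source domain, not its individual content."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in MISINFORMATION_DOMAINS:
        return "misinformation"
    if domain in TRUSTWORTHY_DOMAINS:
        return "trustworthy"
    return "unrated"  # links from unlisted domains would be excluded

posts = [
    "https://www.reuters.com/world/some-story",
    "https://fakenews.example/shocking-claim",
    "https://blog.example/opinion-piece",
]
labels = [classify_link(u) for u in posts]
print(labels)  # ['trustworthy', 'misinformation', 'unrated']
```

Classifying at the source level, as this sketch does, is what makes the method scale to millions of posts: each link needs only a domain lookup rather than an individual fact-check.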
The study’s findings were striking. Across platforms and time periods, misinformation consistently evoked significantly more outrage than trustworthy news. On Facebook, misinformation sparked more anger than any other emotional reaction, solidifying the link between misinformation and outrage, and on Twitter, responses to misinformation posts exhibited a heightened presence of outrage. This pattern held regardless of the time period studied. Strengthening the link further, the research demonstrated that outrage significantly predicts the spread of misinformation: tweets evoking outrage were shared more frequently, irrespective of their veracity. Intriguingly, while outrage consistently boosted sharing, the relative size of the effect varied across the Twitter datasets, sometimes stronger for misinformation and sometimes for trustworthy news, suggesting a nuanced interplay between outrage and content that warrants further investigation. The behavioral experiments reinforced these findings: participants were more inclined to share outrage-inducing headlines regardless of truthfulness, highlighting outrage as a potent driver of online sharing behavior.
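The logic of the behavioral experiments can be illustrated with a toy analysis: group headlines by how much outrage participants reported, then compare how often each group said they would share. The numbers below are invented for illustration and are not the study's data.

```python
# Toy illustration only: does a headline's rated outrage predict
# willingness to share it? Each row is (outrage_rating 1-7, would_share 0/1).
# These values are fabricated for the sketch, not taken from the study.
ratings = [
    (1, 0), (2, 0), (2, 1), (3, 0), (4, 1),
    (5, 1), (6, 1), (6, 0), (7, 1), (7, 1),
]

def share_rate(rows):
    """Fraction of headline ratings where the participant would share."""
    return sum(share for _, share in rows) / len(rows)

low = [r for r in ratings if r[0] <= 3]   # low-outrage headlines
high = [r for r in ratings if r[0] >= 5]  # high-outrage headlines

print(f"share rate, low-outrage headlines:  {share_rate(low):.2f}")
print(f"share rate, high-outrage headlines: {share_rate(high):.2f}")
```

In this fabricated sample the high-outrage group is shared far more often; the study's point is that the analogous gap appeared in the real experiments whether or not the headlines were true.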
The researchers delved further into the psychological drivers behind sharing behavior, distinguishing between epistemic (accuracy-focused) and non-epistemic (emotionally driven) motives. Their findings suggest that outrage fuels non-epistemic sharing. Across Facebook and Twitter, angry reactions correlated with a higher likelihood of sharing posts without reading them, a phenomenon more pronounced for misinformation. This implies that emotions, especially outrage, override accuracy concerns, leading to the spread of unverified or even known false information. The implications of these findings are substantial. Traditional counter-misinformation strategies focused on providing accurate information might be insufficient, as users often share misinformation driven by non-epistemic motives such as signaling political allegiance or moral stances. Outrage-evoking misinformation might be less reputationally risky to share, adding another layer to the challenge. The researchers suggest that interventions targeting these non-epistemic motivations could be more effective.
These results underscore the complex and often counterintuitive dynamics of online information sharing. The researchers suggest that policymakers and social media platforms should re-evaluate their approaches to combating misinformation, shifting focus from solely debunking false claims to addressing the emotional drivers behind sharing. Rather than simply flagging inaccurate content, platforms could experiment with interventions that discourage outrage-driven sharing or promote critical thinking about the emotional content of posts. This could involve highlighting the potential harm of sharing unverified information, or promoting media literacy initiatives that empower users to recognize and resist the manipulation of their emotions. While the study focuses on American users on Facebook and Twitter, the underlying mechanisms of outrage and sharing are likely universal, underlining the need for further research across different demographics and platforms.
The path to these crucial findings was fraught with challenges. Molly Crockett, one of the study’s authors, highlighted the difficulties in accessing social media data, citing lengthy delays and bureaucratic hurdles in obtaining data from Facebook. She also expressed concerns about the deteriorating research environment, with platform policies increasingly restricting access to vital data. This underscores the importance of advocating for open data access for researchers to continue investigating and understanding the complex dynamics of online misinformation. Crockett’s appeal for greater collaboration and integrated approaches among researchers becomes even more critical in this increasingly restrictive environment. The ability to study and address the spread of misinformation relies on access to the very platforms where it proliferates, and researchers face an uphill battle to ensure their work can continue.
The study’s revelations present a significant challenge to social media companies. Given the platforms’ current focus on accuracy-based interventions, incorporating these insights could require a substantial shift in strategy. Moreover, the platforms themselves often benefit from the engagement generated by outrage, even if it stems from misinformation. This creates a conflict of interest, making it unclear whether platforms have the incentive to implement meaningful changes. The current political climate, as described by Crockett, further complicates the situation, with potential repercussions for research funding and platform cooperation. Despite these obstacles, the study provides valuable insight into the mechanics of misinformation, highlighting the need for new and more nuanced approaches to combat its spread. It remains to be seen how social media platforms and policymakers will respond to these findings, but the study serves as a crucial step towards understanding and addressing the complex challenge of online misinformation.