Supersharers Disseminated 80% of Misinformation on Social Media in 2020

By Press Room | December 20, 2024

The Disproportionate Impact of Misinformation Supersharers on Social Media

The spread of misinformation on social media platforms has become a significant concern in recent years, especially given its potential to influence public opinion and behavior. Two recent studies published in the journal Science shed light on this phenomenon, revealing not only the effectiveness of misinformation in altering people's attitudes but also the surprising role played by a small group of dedicated "supersharers" in disseminating false information. These studies, conducted independently by researchers at MIT, Ben-Gurion University, Cambridge, and Northeastern, offer complementary insights into how misinformation propagates.

The first study, led by MIT researcher Jennifer Allen, focused on the impact of vaccine misinformation in 2021 and 2022. The researchers analyzed a vast social media dataset, acknowledging the challenges posed by the sheer volume and complexity of online information. Their findings confirmed that exposure to misinformation, particularly content claiming negative health effects from vaccines, significantly reduced individuals' intent to get vaccinated. Notably, articles flagged by platform moderators as misinformation reduced vaccination intent more strongly, per exposure, than non-flagged content did.

However, the study also revealed a critical caveat: the volume of unflagged misinformation dwarfed the volume of flagged content. While individual pieces of flagged misinformation had a greater impact, the sheer quantity of unflagged misinformation ultimately exerted a more substantial influence on public perception. This “gray area” content, often misleading information from seemingly reputable sources, reached far larger audiences than overtly false content. A prime example cited was a misleading Chicago Tribune headline about a doctor’s death after receiving a COVID-19 vaccine, which garnered millions of views despite the lack of evidence linking the death to the vaccine. This underscores the importance of addressing not only blatant falsehoods but also the subtle and often more pervasive forms of misinformation.

The second study, conducted by a multi-university research team, examined the actors responsible for spreading false information during the 2020 US election. Their analysis of Twitter data from over 660,000 registered voters led to a startling discovery: a mere 2,107 users were responsible for disseminating 80% of the "fake news" during the election period. These "supersharers," predominantly older, white, Republican women, wielded a disproportionate influence on the online information landscape.

These supersharers were not bots or foreign agents but real individuals who actively and persistently retweeted misleading information. Their extensive reach, amplified by platform algorithms, exposed a significant portion of the electorate to false narratives. While the researchers could not definitively rule out some coordinated activity, the patterns they observed suggested genuine user engagement rather than automated bot behavior. The study thus highlighted how vulnerable social media platforms are to manipulation by small groups of dedicated individuals.

The demographics of these supersharers, while revealing, should not obscure the broader issue of misinformation spreading across the political spectrum. While the majority of supersharers in the study fit a specific profile, it’s crucial to remember that misinformation is not solely a problem of one demographic group. The Chicago Tribune headline example illustrates how misleading information from mainstream sources can reach vast audiences regardless of political affiliation. The supersharers’ disproportionate impact underscores the need for strategies to address the concentrated spread of misinformation while also tackling the broader problem of misleading content from various sources.

The convergence of these two studies highlights a critical challenge for online platforms and democratic societies. While efforts to flag and remove overtly false content are necessary, they are insufficient to address the larger problem of misleading information, particularly when amplified by supersharers. The outsized influence of these individuals, combined with the algorithmic amplification of their messages, can distort public perception and undermine trust in reliable sources of information. This raises fundamental questions about the role of social media in shaping public discourse and the need for greater transparency and accountability from platforms.

The implications of these findings extend beyond vaccine hesitancy and election interference. The spread of misinformation poses a threat to informed decision-making on a wide range of issues, from public health to climate change. Addressing this challenge requires a multi-pronged approach that includes improving media literacy, promoting critical thinking, holding platforms accountable for the content they host, and developing strategies to counter the influence of supersharers. The long-term health of democratic societies depends on the ability to navigate the complex information landscape and ensure that informed debate, not misinformation, shapes public discourse.

These findings should not be read as a condemnation of any specific demographic group. Rather, they highlight the susceptibility of social media platforms to manipulation by dedicated individuals and the urgent need for strategies to counter the spread of misinformation. As the earlier examples show, the problem transcends partisan divides and extends to misleading content from mainstream sources; the focus should be on developing effective solutions that address misinformation from all sources and foster a more informed and resilient public discourse.
