Survey: Bipartisan Support for Social Media Removal of False Health Information

By Press Room, July 17, 2025

Public Health Misinformation on Social Media: A Call for Platform Accountability

A recent survey by Boston University's Communication Research Center reveals a strong bipartisan consensus that social media platforms should address the spread of inaccurate public health information. An overwhelming 72% of respondents believe platforms should have the authority to remove such content, a sentiment shared across the political spectrum: 85% of Democrats, 70% of Independents, and a solid majority (61%) of Republicans support removing false public health information. The finding underscores growing public awareness of the harms of misinformation in the digital age and a desire for greater accountability from social media companies.

The survey further explores public attitudes towards various content moderation strategies. A significant majority (63%) support the involvement of independent fact-checking organizations in verifying the accuracy of social media content related to public health. Similarly, 65% of Americans approve of “downranking,” a practice where platforms reduce the visibility of inaccurate information. These findings suggest a public preference for expert-driven and platform-led solutions to combat misinformation, rather than relying solely on user-generated approaches.
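To make the "downranking" idea concrete, here is a minimal sketch of how a feed might suppress rather than delete flagged content. Every name, field, and the penalty factor below are hypothetical illustrations, not any platform's actual ranking system:

```python
# Hypothetical sketch of "downranking": posts flagged as inaccurate stay
# on the platform but receive a reduced ranking score, so they surface
# lower in users' feeds. The 0.2 penalty factor is an illustrative
# assumption, not a real platform parameter.

def rank_score(base_score: float, flagged_inaccurate: bool,
               penalty: float = 0.2) -> float:
    """Return the feed-ranking score; flagged posts are multiplied by a
    penalty factor instead of being removed outright."""
    return base_score * penalty if flagged_inaccurate else base_score

posts = [
    {"id": 1, "base": 0.9, "flagged": False},
    {"id": 2, "base": 0.9, "flagged": True},   # flagged by fact-checkers
]

# Equally engaging posts: the flagged one sinks in the ranking.
ranked = sorted(posts,
                key=lambda p: rank_score(p["base"], p["flagged"]),
                reverse=True)
```

The design choice this illustrates is the one respondents endorsed: reducing visibility preserves the post (avoiding outright censorship) while limiting its reach.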

The study’s lead researcher, Professor Michelle Amazeen, emphasizes the urgency of this issue, particularly in light of the increasing politicization of truth and the erosion of trust in traditional information sources. She notes that the “integrity of public discourse is at risk” and criticizes social media companies for abandoning their own fact-checking programs. Amazeen argues that these platforms have a fundamental responsibility to ensure the accuracy and safety of the information shared on their services, and their failure to do so poses a significant threat to public health and democratic processes.

One approach to content moderation that received less public support is the “community notes” model. This system allows users to write and rate notes that appear alongside posts, theoretically providing a crowdsourced mechanism for identifying misinformation. However, less than half (48%) of survey respondents favored this approach. While there are some partisan variations in support, with Democrats showing higher levels of approval compared to Republicans and Independents, the overall lukewarm reception suggests skepticism about the effectiveness of user-driven fact-checking. Amazeen notes the “sobering” reality of community notes programs, pointing out that platforms using this model continue to be plagued by misinformation.
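For contrast, the crowdsourced "community notes" model can be sketched as follows. This is a simplified illustration under assumed thresholds; the class, field names, and cutoff values are hypothetical and do not reproduce any platform's actual note-scoring algorithm:

```python
# Hypothetical sketch of a "community notes"-style mechanism: users
# attach notes to a post and rate others' notes; a note is displayed
# only after enough raters judge it helpful. The min_ratings and
# threshold values are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Note:
    text: str
    helpful: int = 0
    not_helpful: int = 0

    def rate(self, is_helpful: bool) -> None:
        """Record one user's rating of this note."""
        if is_helpful:
            self.helpful += 1
        else:
            self.not_helpful += 1

    def visible(self, min_ratings: int = 5, threshold: float = 0.6) -> bool:
        """Show the note once it has enough ratings and a high enough
        helpfulness ratio."""
        total = self.helpful + self.not_helpful
        return total >= min_ratings and self.helpful / total >= threshold


note = Note("This claim contradicts current public health guidance.")
for vote in [True, True, True, True, False]:  # 4 helpful, 1 not helpful
    note.rate(vote)
```

The sketch also shows the weakness respondents may be sensing: until a note accumulates enough ratings, the misleading post circulates with no annotation at all.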

The survey also gauged public willingness to financially support independent fact-checking initiatives. Only 32% said they would be willing to donate even a small amount ($1) to such efforts, while a combined 36% disagreed or strongly disagreed. The result highlights the difficulty of securing public funding for fact-checking programs and the need for sustainable funding models to support them. The lack of broad financial support further emphasizes the responsibility of social media companies to invest in robust content moderation systems.

The Boston University survey offers crucial insight into public opinion on content moderation and the interplay between social media, misinformation, and public health. Strong bipartisan support for platform intervention, coupled with skepticism about user-driven approaches, suggests a clear mandate for social media companies to take a more proactive role in curbing inaccurate health information.

The survey also raises concerns about the potential for political manipulation of online platforms and the need for robust accountability mechanisms, particularly as new administrations with varying track records on truthfulness assume office. Amazeen's critique of shifting content moderation responsibilities to users, which she frames as a way for platforms to shirk their duty, resonates with these results and underlines the need for platforms to take ownership of the information ecosystems they create and maintain.

Finally, the study exposes the financial vulnerability of independent fact-checking endeavors, calling for increased investment in these essential services. Taken together, the findings paint a complex picture of public expectations, platform responsibilities, and the ongoing battle against misinformation in an increasingly polarized digital world.
