The Dissemination of Disinformation: An Examination

By Press Room · August 5, 2025

The Rapid Spread of Disinformation in the Digital Age: Unraveling the Mechanisms and Mitigating the Impact

In today’s interconnected world, the proliferation of disinformation, commonly known as “fake news,” poses a significant threat to informed decision-making and societal cohesion. Unlike traditional forms of misinformation, digitally propagated falsehoods spread with unprecedented speed and reach, penetrating deeply into online communities and influencing public discourse. This phenomenon raises critical questions about why certain individuals are more susceptible to sharing disinformation than others, and how the architecture of social media platforms contributes to this issue. Dr. Joana Gonçalves de Sá, a recipient of the prestigious European Research Council (ERC) grant, is at the forefront of research aiming to address these complex questions. Leveraging data science and behavioral psychology, Dr. Gonçalves de Sá’s FARE (Fake News Sharing: A Petri dish for studying human behavior and decision making) project seeks to disentangle the interplay between individual traits, online contexts, and social media algorithms that fuel the spread of disinformation. Her ultimate goal is to develop a real-time tool to audit search engines and effectively detect disinformation.

The digital age has fundamentally altered the landscape of information dissemination. While disinformation and propaganda have always existed, the speed and global reach afforded by digital technologies are unprecedented. Social media platforms and search engines employ sophisticated algorithms designed to maximize user engagement, often prioritizing attention-grabbing content over factual accuracy. These algorithms, fueled by user data such as past interactions, cookies, and browsing history, create personalized “information bubbles” that reinforce pre-existing beliefs and limit exposure to diverse perspectives. This echo-chamber effect can lead individuals to accept disinformation as truth without realizing it, exacerbating societal polarization and inequality. Dr. Gonçalves de Sá emphasizes the insidious nature of these information bubbles, highlighting their potential to trap users in a cycle of biased information consumption. Her research demonstrates that these platforms exploit inherent cognitive biases, such as our tendency to favor information that confirms our existing beliefs, making us more susceptible to sharing disinformation.

The FARE project’s innovative approach focuses on the act of sharing disinformation rather than on the content itself. Dr. Gonçalves de Sá likens the spread of fake news to a “Petri dish” for understanding human behavior and biases. By analyzing which biases and contextual factors make individuals more likely to believe and share false information, researchers can gain insight into the decision-making processes that drive the spread of disinformation. The project employs a multi-faceted data collection strategy, encompassing over 10,000 news articles, thousands of Twitter user profiles and their activity data, and surveys conducted across four continents. Drawing on theories from information systems and epidemiology, Dr. Gonçalves de Sá and her team aim to develop a “macroscope”: a comprehensive analytical tool that combines multiple datasets and statistical models to track the spread of disinformation in real time, much as epidemiologists monitor the spread of a disease. This approach allows them to identify susceptible populations, contagion rates, and the prevalence of disinformation within specific communities.
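The epidemiological framing behind the “macroscope” can be illustrated with a simple compartmental (SIR-style) simulation, in which users move from susceptible to actively sharing to no longer sharing. The parameter values, compartment names, and population size below are illustrative assumptions, not figures from the FARE project:

```python
# Illustrative SIR-style model of disinformation spread:
# Susceptible -> Sharing ("infected") -> Recovered (stopped sharing).
# beta = contagion rate, gamma = recovery rate -- hypothetical values.

def simulate_spread(population, initially_sharing, beta, gamma, steps):
    s = float(population - initially_sharing)  # susceptible users
    i = float(initially_sharing)               # users actively sharing
    r = 0.0                                    # users who stopped sharing
    history = []
    for _ in range(steps):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_spread(population=10_000, initially_sharing=10,
                          beta=0.4, gamma=0.1, steps=100)
peak_sharing = max(i for _, i, _ in history)
print(f"peak share of users spreading the story: {peak_sharing / 10_000:.1%}")
```

In this toy model the ratio beta/gamma plays the role of a “contagion rate” threshold: when it exceeds 1, a small seed of sharers can trigger a population-wide cascade, which is the kind of dynamic the macroscope is meant to detect early.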

Recognizing the ethical implications of using big data in social science research, Dr. Gonçalves de Sá’s team prioritizes user privacy. Adhering to the ERC’s strict ethical guidelines, they are developing innovative testing methods that minimize data usage and protect individual privacy. These include testing with limited datasets, avoiding the use of personally identifiable information, and employing techniques like federated learning, a machine learning method that doesn’t require sharing data samples. These privacy-preserving approaches have the potential to serve as valuable models for future research in the field.
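Federated learning, mentioned above as one of the team’s privacy-preserving techniques, trains a shared model without pooling raw data: each client computes an update locally, and only model parameters are sent to a central aggregator. A minimal sketch of federated averaging on a toy one-parameter model (the per-client datasets are invented for illustration):

```python
# Minimal federated-averaging sketch: each "client" fits a one-parameter
# model (here just a sample mean) on its own data. Only the fitted
# parameter -- never the raw data -- is sent to the server, which
# averages the local parameters weighted by each client's data size.

def local_update(data):
    """Fit the model locally; the 'model' here is the sample mean."""
    return sum(data) / len(data)

def federated_average(client_datasets):
    total = sum(len(d) for d in client_datasets)
    return sum(local_update(d) * len(d) for d in client_datasets) / total

# Hypothetical per-client datasets (e.g. per-user engagement scores).
clients = [[1.0, 2.0, 3.0], [4.0, 5.0], [0.0, 6.0, 6.0, 6.0]]
print(federated_average(clients))
```

For this particular model the weighted average of local means equals the mean of the pooled data, so the server learns the same parameter it would have learned with full data access, while no client ever discloses its samples.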

The phenomenon of sharing fake news challenges the traditional assumption of humans as rational actors. One might expect individuals to critically evaluate information and refrain from sharing demonstrably false content. However, research suggests that cognitive biases often override rational thought. Individuals tend to overestimate their knowledge and readily share information that confirms their pre-existing beliefs. Dr. Gonçalves de Sá argues that this overestimation of knowledge can be tested and used as an indicator of susceptibility to disinformation. Furthermore, environmental factors play a significant role. Even if an individual is susceptible to disinformation, they are more likely to share it if their social network is receptive to such content. This social reinforcement mechanism contributes to the “viral” spread of disinformation. The socio-economic context also influences an individual’s ability to discern truth from falsehood. Navigating the constant influx of information requires significant cognitive effort, and factors like fatigue and stress can diminish critical thinking abilities, making individuals more vulnerable to disinformation.

Extending the research beyond social networks, Dr. Gonçalves de Sá has received ERC Proof of Concept funding for FARE_Audit, a spin-off project focused on the role of search engines in disseminating disinformation. This project aims to develop a tool that can detect and monitor disinformation in real time within search engine results. The team is investigating how search engine algorithms, influenced by browsing history and cookies, contribute to the creation of disinformation bubbles. By simulating online searches using bots and web crawlers, researchers can analyze how search results vary based on a user’s online profile. They aim to determine whether users who visit low-credibility websites are more likely to be directed towards similar content in subsequent searches, effectively trapping them in disinformation bubbles.
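The bot-based audit described above can be sketched in outline: create synthetic user profiles with different browsing histories, issue the same query through each, and compare how the returned rankings diverge. The ranking function, domains, and scores below are hypothetical stand-ins invented for illustration; FARE_Audit’s actual crawlers query real search engines:

```python
# Toy model of a personalized search ranker: results whose domain appears
# in the persona's browsing history get a score boost. All domains,
# relevance scores, and the boost rule are hypothetical stand-ins.

BASE_RESULTS = {  # query -> [(url, base_relevance), ...]
    "vaccine safety": [
        ("who.int/vaccines", 0.9),
        ("healthnews.example/safety", 0.7),
        ("lowcred.example/exposed", 0.5),
        ("university.example/study", 0.6),
    ],
}

def personalized_ranking(query, history, boost=0.5):
    """Rank results, boosting domains the persona has visited before."""
    scored = []
    for url, relevance in BASE_RESULTS[query]:
        domain = url.split("/")[0]
        score = relevance + (boost if domain in history else 0.0)
        scored.append((score, url))
    return [url for score, url in sorted(scored, reverse=True)]

fresh_persona = set()                 # no browsing history
primed_persona = {"lowcred.example"}  # has visited a low-credibility site

fresh = personalized_ranking("vaccine safety", fresh_persona)
primed = personalized_ranking("vaccine safety", primed_persona)
print("top result, fresh persona: ", fresh[0])
print("top result, primed persona:", primed[0])
```

Even this caricature reproduces the effect the audit looks for: identical queries return different top results depending on the persona’s history, with the “primed” persona steered back toward the low-credibility source it visited before.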

While completely eradicating the spread of fake news may be an insurmountable challenge, Dr. Gonçalves de Sá believes that understanding the factors that contribute to its dissemination is crucial. By identifying the individual susceptibilities, contextual factors, and algorithmic mechanisms that promote the spread of disinformation, researchers can develop targeted interventions aimed at minimizing exposure to fake news and fostering more informed online interactions. Through ongoing research and the development of innovative tools like the FARE_Audit system, Dr. Gonçalves de Sá and her team are making significant contributions to the fight against disinformation, helping to create a more informed and resilient digital landscape.
