The Impact of Misinformation on Higher-Order Evidence in the Humanities and Social Sciences

By Press Room, December 19, 2024

Unreliable Information and Belief Formation in Social Networks: A Simulation Study

This article explores how unreliable information impacts belief formation within social networks. Using computational simulations, we examine how individuals update their beliefs when exposed to both accurate and inaccurate reports from their peers. The study focuses on two types of unreliable agents: misinformants, who unintentionally spread false information, and disinformants, who deliberately mislead others. We also consider two distinct information processing strategies: a gullible approach, where individuals fully trust all incoming information, and an aligned approach, where individuals discount information based on the known level of unreliability in the network. The simulations reveal the complex interplay between the type of unreliable agent, the information processing strategy employed, and the overall reliability of the network in shaping collective beliefs.

Our simulations build upon the foundational work of Bala and Goyal (1998) and Zollman (2007), modeling social networks as graphs where nodes represent individuals and edges represent communication channels. Each agent holds a credence, a degree of belief between 0 and 1, regarding the superiority of a hypothetical option "B" over "A". Agents whose credence exceeds 0.5 undertake trials to test this hypothesis and share their results with their neighbors. Initially, each agent’s credence is randomly assigned. In the basic model, agents update their beliefs using Bayes’ rule, incorporating both their own experiences and the reports they receive. This process iterates until the network reaches a consensus, in which all agents strongly believe that B is superior or all strongly believe that A is superior, or until a pre-defined limit on simulation steps is reached.
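
To make the basic update concrete, here is a minimal sketch in Python of an agent’s trial-and-update cycle, assuming the common two-hypothesis setup in which option B succeeds with probability 0.6 if the hypothesis is true and 0.4 if it is false; these rates and the trial count are illustrative assumptions, not parameters taken from the paper.

```python
import random
from math import comb

# Illustrative two-hypothesis setup (assumed, not the paper's parameters):
# if "B is superior" is true, B succeeds at rate P_GOOD; if false, at P_BAD.
P_GOOD, P_BAD = 0.6, 0.4
N_TRIALS = 10  # tests of option B per round for agents who favor B

def run_trials(n=N_TRIALS, p=P_GOOD):
    """Simulate n tests of option B; return the number of successes."""
    return sum(random.random() < p for _ in range(n))

def bayes_update(credence, successes, n=N_TRIALS):
    """Apply Bayes' rule to a report of `successes` out of `n` trials of B.

    Posterior odds = prior odds x likelihood ratio. The binomial
    coefficient is common to both likelihoods and cancels, but is
    kept here for clarity.
    """
    like_good = comb(n, successes) * P_GOOD**successes * (1 - P_GOOD)**(n - successes)
    like_bad = comb(n, successes) * P_BAD**successes * (1 - P_BAD)**(n - successes)
    numer = like_good * credence
    return numer / (numer + like_bad * (1 - credence))
```

A gullible agent, in the terminology introduced below, applies this update verbatim to every report it receives.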

O’Connor & Weatherall (2018) incorporate homophily into this style of model: individuals trust those with similar beliefs more readily. Our simulations, however, explore a different dynamic. We incorporate higher-order evidence regarding the reliability of the shared information, allowing agents to adjust their trust accordingly. This distinction is crucial, as it shifts the focus from the similarity of beliefs to the trustworthiness of the information source: rather than weighting reports by agreement, our agents discount evidence based on the known level of unreliability within the network.

Two distinct types of unreliable agents are introduced into our simulations: misinformants and disinformants. Misinformants, due to incompetence or other factors, provide inaccurate information that is essentially neutral in its impact. Their reports neither support nor refute the hypothesis being tested, resembling random noise. Disinformants, on the other hand, actively seek to deceive by reporting the opposite of their observations. Their reports are specifically designed to mislead and undermine the accurate assessment of the hypothesis. These two types represent distinct challenges to accurate belief formation within the network.
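
Continuing the sketch above, one way to encode these two agent types is shown below, reading a misinformant’s "essentially neutral" inaccuracy as a uniformly random report and a disinformant’s deception as an inverted one; this encoding is our assumption rather than a rule quoted from the study.

```python
def make_report(agent_type, successes, n=N_TRIALS):
    """What an agent broadcasts to its neighbors, by agent type."""
    if agent_type == "reliable":
        return successes             # honest report of observed successes
    if agent_type == "misinformant":
        return random.randint(0, n)  # pure noise: supports neither hypothesis
    if agent_type == "disinformant":
        return n - successes         # inverted report, designed to mislead
    raise ValueError(f"unknown agent type: {agent_type!r}")
```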

To counter the effects of unreliable information, two information processing strategies are considered: gullible and aligned. Gullible agents ignore the presence of unreliable agents and trust all incoming information indiscriminately, applying Bayes’ rule directly. While this might seem naive, arguments from authors like Burge (1993), Coady (1992), and Reid (1983) suggest a prima facie justification for trusting testimony. In contrast, aligned agents incorporate the network’s reliability level into their belief updates. They discount evidence based on the known proportion of unreliable agents, effectively dampening the impact of potentially misleading reports. This strategy aligns their confidence in the evidence with the overall trustworthiness of the network.
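
A sketch of how the aligned strategy’s discounting could be implemented: each incoming report is treated as honest with probability equal to the network’s reliability, and as uninformative noise otherwise. This mixture likelihood is one plausible reading of the discounting the article describes, not necessarily the authors’ exact rule; setting reliability to 1.0 recovers the gullible (plain Bayesian) update.

```python
def aligned_update(credence, reported, reliability, n=N_TRIALS):
    """Discounted Bayesian update on a peer's report.

    The report is modeled as an honest binomial draw with probability
    `reliability`, and as uniform noise over 0..n otherwise, so less
    reliable networks dampen the evidential force of every report.
    """
    uniform = 1.0 / (n + 1)
    binom_good = comb(n, reported) * P_GOOD**reported * (1 - P_GOOD)**(n - reported)
    binom_bad = comb(n, reported) * P_BAD**reported * (1 - P_BAD)**(n - reported)
    like_good = reliability * binom_good + (1 - reliability) * uniform
    like_bad = reliability * binom_bad + (1 - reliability) * uniform
    numer = like_good * credence
    return numer / (numer + like_bad * (1 - credence))
```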

Our simulations examine the performance of these strategies under varying levels of network reliability. We conducted simulations on complete networks of 64 agents, testing scenarios with no unreliable agents, misinformants, and disinformants. Network reliability was set at 75%, 50%, and 25%, representing different levels of trustworthiness. For each set of parameters, 500 simulations were run, allowing for robust analysis of the observed patterns. The simulations tracked the evolution of beliefs until a consensus was reached or a maximum of 20,000 steps was completed.
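
Putting the pieces together, the loop below sketches the experimental sweep under the parameters the article reports (64 agents on a complete network, reliability of 75%, 50%, or 25%, 500 runs per condition, at most 20,000 steps); the consensus thresholds and the overall structure are our own simplifications.

```python
import statistics
from itertools import product

def run_simulation(n_agents=64, agent_type="misinformant",
                   reliability=0.75, max_steps=20_000):
    """One run on a complete network: every report reaches every agent."""
    creds = [random.random() for _ in range(n_agents)]
    n_bad = round(n_agents * (1 - reliability))
    types = ["reliable"] * (n_agents - n_bad) + [agent_type] * n_bad
    for _ in range(max_steps):
        # Agents who currently favor B test it and broadcast their reports.
        reports = [make_report(types[i], run_trials())
                   for i, c in enumerate(creds) if c > 0.5]
        if not reports:
            break  # nobody tests B, so the network is stuck favoring A
        for i in range(n_agents):
            for rep in reports:
                creds[i] = aligned_update(creds[i], rep, reliability)
        # Consensus: all agents strongly favor B, or all strongly favor A.
        if all(c > 0.99 for c in creds) or all(c < 0.01 for c in creds):
            break
    return statistics.mean(creds)

for agent_type, rel in product(["misinformant", "disinformant"], [0.75, 0.50, 0.25]):
    runs = [run_simulation(agent_type=agent_type, reliability=rel) for _ in range(500)]
    print(agent_type, rel, statistics.mean(runs))
```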

The simulations commenced with the standard Bala-Goyal model, assuming perfect reliability (equivalent to both gullible and aligned strategies at 100% reliability). Subsequently, simulations incorporated unreliable agents (misinformants and disinformants) and varied the network reliability. This allowed us to isolate the impact of unreliable information and different response strategies on the final consensus reached. The results of these simulations shed light on the effectiveness of different information processing strategies in navigating environments with varying levels of misinformation and disinformation.

Our findings highlight the interplay between the type of unreliable agent, the information processing strategy, and the network’s reliability. In perfectly reliable networks, both gullible and aligned strategies converge on the true hypothesis. As reliability decreases, the strategies diverge, with the aligned strategy demonstrating greater resilience to misinformation. When disinformants are present, the aligned strategy consistently outperforms the gullible strategy, preventing the spread of deliberately false information. These results underscore the importance of incorporating higher-order evidence about reliability when evaluating information within social networks.

The simulations demonstrate that simply trusting all information can be detrimental in the presence of misinformation and especially disinformation. The aligned strategy, by discounting evidence based on reliability, offers a more robust approach to belief formation in unreliable environments. This study provides valuable insights into the dynamics of belief formation in online social networks and other information ecosystems where the reliability of information sources can vary significantly. Further research could explore the impact of network structure and other factors on these dynamics.

In conclusion, our simulations show how unreliable information and information processing strategies jointly shape collective beliefs. The aligned strategy, incorporating higher-order evidence of reliability, demonstrates resilience to both misinformation and disinformation. These findings offer important lessons for understanding how individuals can navigate information processing in today’s interconnected world, where unreliable information is a pervasive challenge. The research emphasizes the crucial role of critical evaluation and the integration of reliability assessments in forming accurate beliefs within social networks and other information ecosystems.
