2024: A Crucible of Misinformation in a Super Election Year

The year 2024 witnessed a confluence of significant global events – crucial elections across numerous countries, violent regime changes in Bangladesh and Syria, and a deepening geopolitical crisis in the Middle East. Complicating this already volatile landscape was the burgeoning influence of artificial intelligence, which injected a potent mix of misinformation and disinformation into the online sphere. This toxic brew exacerbated polarization, distorted public opinion, and ultimately manifested in real-world consequences. Newschecker, an organization dedicated to combating misinformation, undertook a comprehensive analysis of 2,729 fact-checks conducted between January 2nd and December 11th, 2024, across ten languages, providing a stark picture of the misinformation ecosystem.

Politics Dominates the Misinformation Landscape

With nearly half the world’s population participating in elections in more than 70 countries, 2024 was dubbed a "super election year." Unsurprisingly, politics emerged as the dominant theme in the misinformation landscape, accounting for 42% of the debunked claims analyzed by Newschecker. In India, the general elections took center stage, with key narratives revolving around the Maldives’ "India Out" campaign, the Ram temple consecration in Ayodhya, the Bharat Jodo Nyay Yatra, reservation debates, and violence in West Bengal. International events, including elections in other major countries, the overthrow of the Bangladeshi government, and the escalating Israel-Palestine conflict, constituted the second-largest category of misinformation, comprising 15% of the fact-checks.

Communal Narratives and Targeted Disinformation

Religious themes formed a significant part of the misinformation landscape, with 13% of debunked claims centered on religion. A concerning 19% of the total claims were identified as communal, including 5% linked to political issues and 8% to religious matters. These narratives often involved allegations of religion-based violence, discrimination, and abuse. The "love jihad" trope persisted, accounting for nearly 1% of the total fact-checked claims. Disturbingly, within the communal claims, 15% targeted the Muslim community, while only 2.3% targeted Hindus, a clear disparity in online targeting.

Verified Accounts and Mainstream Media: Unwitting Accomplices?

The spread of misinformation was significantly amplified by verified social media pages. A staggering 37% of the fact-checked claims originated from accounts with a blue tick, a marked increase from 26% in 2023. This trend underscores the growing challenge of maintaining trust in social media, particularly after X (formerly Twitter) revised its verification policy. Mainstream media also played a role in disseminating misinformation, with 3% of the debunked claims amplified by news outlets such as Hindustan Times, Indian Express, Sun News, Times of India, NDTV, and Network 18. This highlights the need for increased media literacy and responsible reporting.

Vulnerable Groups Bear the Brunt of Misinformation

The analysis revealed a disturbing increase in misinformation targeting women, rising from 8% in 2023 to 13% in 2024. This trend reflects how socio-political narratives increasingly weaponize misinformation to target women, creating panic or stoking communal tensions. Misinformation targeting Scheduled Castes and Scheduled Tribes remained consistent at 1% between 2023 and 2024. While relatively low, this figure points to the persistent use of misinformation to marginalize vulnerable communities through stereotypes, prejudiced narratives, and debates around the reservation system.

The Rise of AI-Powered Misinformation

Artificial intelligence emerged as a new frontier in the misinformation landscape, with nearly 3% of the debunked claims involving the use of generative AI or AI-altered media. Deepfake technology was employed to create fabricated videos and images, highlighting the potential for AI to be weaponized to exploit public trust. Online scams doubled from 1% in 2023 to 2% in 2024, raising concerns about AI’s future role in misinformation. Examples include deepfakes promoting fraudulent investment schemes and AI-generated audio conversations used to spread political misinformation.

Key Targets of Misinformation in 2024

The Newschecker analysis identified several prominent targets of misinformation campaigns. Among political leaders, Rahul Gandhi, Narendra Modi, and K Annamalai topped the list. Women politicians, including Mamata Banerjee, Kangana Ranaut, and Kanimozhi, faced significant online attacks. The BJP, Congress, and DMK were the most targeted political parties, while opposition-ruled states like Tamil Nadu and Kerala, along with Uttar Pradesh, were frequently targeted by misinformation. The Election Commission and EVMs also saw a surge in misinformation, particularly during the election period. Internationally, Sheikh Hasina, Donald Trump, and Benjamin Netanyahu were frequent targets, while Israel, Lebanon, and Iran were the most targeted countries. Celebrities and business figures were not immune, with Virat Kohli, Cristiano Ronaldo, Salman Khan, Gautam Adani, Ratan Tata, and NR Narayana Murthy facing various misinformation campaigns.

The Form and Substance of Misinformation

Outright false claims constituted 65% of the debunked misinformation, while misleading claims using out-of-context images and videos accounted for 17%. Images and videos with misleading overlays were the most common form, comprising 70% of the total shared misinformation, followed by standalone videos (17%), images (7%), and text-based claims (4%). The prevalence of visual media underscores the need for enhanced media literacy and critical evaluation of online content.

The 2024 misinformation landscape reveals a complex interplay of political motivations, technological advancements, and targeted campaigns aimed at exploiting vulnerabilities and shaping public discourse. The rise of AI-powered misinformation necessitates proactive measures to combat its spread and protect individuals and communities from its harmful effects.
