Automated Accounts Disseminate Climate Disinformation and Conspiratorial Content.

By Press Room, December 16, 2024

Bot-Driven Climate Disinformation Campaign Uncovered on X (Formerly Twitter)

A recent investigation has revealed a coordinated network of bot-like accounts spreading climate disinformation and conspiracy theories on X, the social media platform formerly known as Twitter. These accounts, identified by their prolific posting, low proportion of original content, and rapid creation in response to specific events, actively engaged in political discussions, amplifying divisive narratives and fostering mistrust of climate action proponents. This activity raises serious concerns about the integrity of online discourse and the manipulation of public opinion on critical issues like climate change.
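The investigation does not spell out how those signals were weighted, but they lend themselves to a simple scoring rule. The following Python sketch is purely illustrative and is not the researchers' method; the thresholds, weights, and field names (posts_per_day, original_ratio, account_age_days) are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class AccountStats:
    posts_per_day: float    # average posting volume
    original_ratio: float   # share of posts that are original rather than reposts/replies
    account_age_days: int   # days since the account was created

def bot_likeness_score(a: AccountStats) -> float:
    """Toy heuristic combining the three signals named in the article.

    Returns a score in [0, 1]; higher means more bot-like.
    Weights and thresholds are illustrative assumptions only.
    """
    score = 0.0
    if a.posts_per_day > 60:        # prolific posting
        score += 0.4
    if a.original_ratio < 0.1:      # little original content, mostly amplification
        score += 0.4
    if a.account_age_days < 30:     # created recently, e.g. around a specific event
        score += 0.2
    return score

if __name__ == "__main__":
    suspect = AccountStats(posts_per_day=120, original_ratio=0.05, account_age_days=12)
    print(f"bot-likeness: {bot_likeness_score(suspect):.2f}")  # prints 1.00
```

In practice researchers typically combine many more signals (posting-time regularity, client metadata, network position), but even a crude score like this shows how the three traits named above can flag candidate accounts for manual review.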

The investigation, conducted by researchers, initially focused on 45 bot-like accounts that participated in UK political discourse, promoting content related to the "Great Replacement" conspiracy theory and responding to global events with racist remarks and misinformation. Further analysis revealed that a significant portion of these accounts were also engaged in spreading climate denial and conspiratorial content. Hashtags like #ClimateScam, #NetZeroScam, and #ClimateCult were frequently used, often in conjunction with other conspiracy-related hashtags. This coordinated dissemination of disinformation paints climate action as a sinister plot, further polarizing the already contentious debate surrounding climate change.
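The hashtag pairing described here can be illustrated with a short co-occurrence count: for every post containing one of the climate-denial hashtags, tally which other hashtags appear alongside it. The snippet below is a minimal sketch assuming posts are available as plain text; it is not the study's actual pipeline, and the sample posts are invented.

```python
import re
from collections import Counter
from itertools import combinations

CLIMATE_TAGS = {"#climatescam", "#netzeroscam", "#climatecult"}

def hashtag_cooccurrence(posts: list[str]) -> Counter:
    """Count hashtag pairs appearing together in posts that contain a climate-denial tag."""
    pairs: Counter = Counter()
    for text in posts:
        tags = {t.lower() for t in re.findall(r"#\w+", text)}
        if tags & CLIMATE_TAGS:
            pairs.update(frozenset(p) for p in combinations(sorted(tags), 2))
    return pairs

# Invented sample posts for illustration only.
posts = [
    "Wake up! #ClimateScam #GreatReset",
    "Another lie from the elites #NetZeroScam #GreatReset #WEF",
]
for pair, n in hashtag_cooccurrence(posts).most_common(3):
    print(sorted(pair), n)
```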

The disinformation campaign employed several key narratives. One prominent claim portrayed climate action as a threat to the natural world and human life, alleging that solar panels and wind farms cause significant environmental damage. This narrative frames climate advocates as a destructive "cult" demanding sacrifices from the public. More extreme claims suggested that the climate movement aims to deliberately reduce people’s quality of life or that "geoengineering" is a malicious scheme to inflict illness and suppress dissent.

Another prevalent narrative depicted climate action as a cover-up by elites seeking profit and control. These accounts argued that environmental policies are merely tools for governments to seize power and wealth, potentially even weakening the population through deliberate illness. This narrative resonates with existing anti-establishment sentiments and fuels distrust in government initiatives. The investigation also uncovered support for "blade runners," individuals who vandalize ULEZ cameras as a form of protest against perceived government surveillance and overreach, framing environmental policies as infringements on personal freedom.

While these bot-like accounts did not exclusively focus on climate discussions, mentioning "climate" or "climate change" relatively infrequently compared to their overall posting volume, the topic remained a consistent thread in their activity. These accounts often bundled their climate denial stance with other political positions, suggesting a deliberate effort to create a cohesive, albeit distorted, ideological framework. Opposition to climate action was frequently linked with opposition to LGBTQIA+ rights, vaccination, and involvement in international conflicts like the war in Ukraine. Conversely, support for these causes was framed as part of a "globalist" or "far-left fascist" agenda, further reinforcing the divisive rhetoric.

Interestingly, not all bot-like accounts in the study opposed climate action. Some profiles expressed support for environmental initiatives, often aligning their pro-climate stance with traditionally left-wing viewpoints like supporting the Labour Party, opposing the Conservative Party, or advocating for rejoining the EU. This suggests a more nuanced manipulation strategy, potentially aiming to create an illusion of balanced discourse while still promoting disinformation and exploiting pre-existing political divides.

The investigated accounts did not operate in isolation. They frequently shared links to external websites known for disseminating false health and climate information, as well as content from accounts promoting conspiracy theories. Further analysis revealed interactions between these bot-like accounts and other users with substantial followings, indicating a potential network of coordinated amplification. Evidence even suggested connections between some bot-like accounts and other profiles sharing similar names and profile images, raising suspicions about their coordinated creation and purpose.
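As a rough illustration of how "similar names" might be surfaced, fuzzy string matching on account handles is one common starting point. The sketch below uses Python's standard difflib; the handles and the 0.8 similarity cutoff are invented assumptions, not data from the investigation.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similar_handles(handles: list[str], cutoff: float = 0.8) -> list[tuple[str, str, float]]:
    """Return pairs of handles whose similarity ratio meets or exceeds the cutoff."""
    hits = []
    for a, b in combinations(handles, 2):
        ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio >= cutoff:
            hits.append((a, b, round(ratio, 2)))
    return hits

# Invented example handles for illustration only.
print(similar_handles(["TruthSeeker_84", "TruthSeeker_85", "GreenFuture"]))
```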

These findings underscore the urgent need to address the proliferation of bots and their role in spreading disinformation. The manipulative potential of social media platforms, particularly their ability to amplify divisive and harmful content, poses a significant challenge to democratic discourse and informed decision-making. As the climate crisis demands evidence-based solutions and collaborative action, the spread of misinformation undermines public trust and hinders progress towards effective climate policies.

The researchers urged X to thoroughly investigate the identified bot-like accounts and enforce its policies against platform manipulation, which explicitly prohibit artificially amplifying information or engaging in behavior that disrupts the user experience. However, the platform’s response to the researchers’ findings was dismissive, claiming a lack of evidence of platform manipulation and questioning their methodology. This highlights the ongoing difficulty of holding social media companies accountable for the disinformation spreading on their services.

The lack of transparency from social media companies, coupled with increasingly restricted researcher access to platform data, limits the ability to definitively identify and analyze bot activity. This underscores the need for stronger regulations requiring greater transparency from platforms. Without such measures, researchers and the public are left reliant on the platforms’ own assessments, which can be inconsistent and opaque. The prevalence of bot-driven disinformation campaigns necessitates a concerted effort from platforms, policymakers, and researchers to safeguard online discourse and ensure that critical conversations about climate change and other pressing issues are not undermined by manipulative tactics.
