AI-Generated Imagery and Emotionally Manipulative Language Used by Bots to Influence Online Discourse During Federal Election

By Press Room | April 21, 2025

Australia Grapples with Deluge of Fake Social Media Accounts During Election Campaign

The integrity of Australia’s recent election campaign has been called into question following the revelation of widespread disinformation tactics employed on social media platforms. A report by disinformation detection company Cyabra has uncovered a significant presence of fake accounts on X (formerly Twitter) actively participating in political discussions and reaching millions of Australian voters. These accounts, estimated to comprise nearly one-fifth of the election-related profiles analyzed, employed artificial intelligence-generated images and emotionally manipulative language to disseminate biased narratives. One particularly active account, posting over 500 times, reached an audience of approximately 726,000 users, demonstrating the scale and potential impact of these coordinated disinformation campaigns.

Cyabra’s "Disinformation Down Under" report details how these bot accounts targeted both Prime Minister Anthony Albanese and Opposition Leader Peter Dutton with distinct strategies. One set of fake accounts sought to discredit Albanese and undermine his political standing by amplifying messages about the Labor government’s alleged incompetence, economic mismanagement, and progressive policies, deploying hashtags like "Labor fail" and "Labor lies" while also resorting to ridicule and name-calling, further fueling the polarized online environment. Conversely, other fake profiles pushed pro-Labor narratives, portraying Dutton as out of touch and inept and labeling the Coalition as broadly incompetent and corrupt, creating a false impression of widespread support for the incumbent administration. This two-pronged approach maximized the spread of disinformation and contributed to the erosion of public trust in the political process.

The sophistication of these disinformation campaigns is evident in the bots’ strategic use of emotionally charged language, satire, and memes to maximize visibility and engagement. By exploiting the virality of such content, the fake accounts were able to effectively disseminate their fabricated narratives and manipulate the online conversation. The analysis, conducted throughout March, used AI technology to identify patterns of inauthentic activity, including posting frequency, language usage, and hashtags employed. This revealed coordinated efforts to push specific narratives designed to sway public opinion. The sheer volume of bot activity at times eclipsed genuine user engagement, allowing these fake accounts to dominate the narrative and drown out authentic voices.
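The report does not publish Cyabra’s detection model, but the signals it names — abnormal posting frequency, repetitive language, and coordinated hashtag use — can be illustrated with a minimal scoring sketch. Everything below is a hypothetical simplification: the thresholds, field names, and hashtags are assumptions for illustration, not Cyabra’s actual methodology.

```python
from collections import Counter

# Illustrative thresholds (assumptions, not values from the report)
POSTS_PER_DAY_LIMIT = 50                  # organic users rarely post this often
CAMPAIGN_HASHTAGS = {"laborfail", "laborlies"}

def inauthenticity_score(account):
    """Crude 0-3 score combining three independent coordination signals."""
    score = 0

    # Signal 1: abnormally high posting frequency
    if account["posts"] / max(account["days_active"], 1) > POSTS_PER_DAY_LIMIT:
        score += 1

    # Signal 2: hashtag usage dominated by known campaign tags
    tags = Counter(account["hashtags"])
    campaign_uses = sum(tags[t] for t in CAMPAIGN_HASHTAGS)
    if tags and campaign_uses > 0.5 * sum(tags.values()):
        score += 1

    # Signal 3: near-duplicate wording across posts (copy-paste coordination)
    texts = account["posts_text"]
    if texts and len(set(texts)) < 0.5 * len(texts):
        score += 1

    return score

# A profile resembling the hyperactive account described in the report
bot_like = {
    "posts": 520, "days_active": 5,
    "hashtags": ["laborfail", "laborlies", "laborfail", "auspol"],
    "posts_text": ["Labor lies again"] * 4,
}

# A typical organic profile for contrast
genuine = {
    "posts": 40, "days_active": 30,
    "hashtags": ["auspol", "election"],
    "posts_text": ["thoughts on the debate", "polling day soon", "voted!"],
}

print(inauthenticity_score(bot_like))   # all three signals fire → 3
print(inauthenticity_score(genuine))    # no signals fire → 0
```

Real systems weight far more features (account age, network structure, image provenance) and use learned models rather than fixed thresholds, but the principle is the same: no single signal proves inauthenticity, while several firing together is a strong coordination marker.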

The report highlights the significant implications of these findings for electoral integrity. The ability of malicious actors to create and deploy large numbers of fake accounts to spread disinformation poses a serious threat to democratic processes. By manipulating online discourse, these actors can potentially influence public opinion, suppress legitimate voices, and create an environment of distrust and division. The fact that these bots were able to reach such a large audience underscores the vulnerability of social media platforms to manipulation and the urgent need for more effective measures to combat disinformation.

While the impact of these disinformation campaigns on the election outcome remains difficult to quantify, the sheer scale of the operation raises serious concerns. The manipulation of online discourse through coordinated bot activity can erode public trust in democratic institutions and processes. Furthermore, the emotional nature of the content disseminated by these accounts can exacerbate existing societal divisions and fuel political polarization. The findings of this report serve as a wake-up call for social media platforms, policymakers, and the public to address the growing threat of disinformation and protect the integrity of democratic elections.

The increasing use of AI in generating fake profiles and content poses a significant challenge to electoral integrity. While the prevalence of actual incidents impacting elections in 2024 was relatively low, according to the Australian Electoral Commission, the potential for manipulation remains a serious concern. The difficulty in identifying the individuals or groups orchestrating these campaigns further complicates the issue. Addressing this growing threat effectively requires a multi-faceted approach involving increased platform accountability, enhanced media literacy among the public, and robust legal frameworks to deter and punish those engaging in disinformation tactics. The future of democratic elections hinges on the ability to ensure that public discourse is not hijacked by malicious actors seeking to undermine trust and manipulate outcomes.
