Fake Social Media Profiles Infiltrate Australian Election Discourse, Raising Concerns About Electoral Integrity
The Australian political landscape is facing a growing threat from an influx of fake social media accounts that are actively spreading disinformation and manipulating public opinion during the current election campaign. An analysis conducted by disinformation detection company Cyabra revealed that nearly one in five accounts engaging in election-related discussions on X (formerly Twitter) were fake, employing AI-generated images and emotionally charged language to sway public sentiment. One such account, posting over 500 times, reached an audience of approximately 726,000 users, highlighting the scale and potential impact of these sophisticated disinformation campaigns on electoral integrity.
Cyabra’s "Disinformation Down Under" report details how these bots targeted both Prime Minister Anthony Albanese and Opposition Leader Peter Dutton. While some bots focused on undermining Albanese and the Labor Party by amplifying narratives about government incompetence, economic mismanagement, and excessive progressivism, others pushed pro-Labor messages to create a false impression of widespread support. This coordinated effort, according to Cyabra, aimed to erode public trust in the government ahead of the election. The report highlighted the strategic use of hashtags like "Labor fail" and "Labor lies," as well as derisive nicknames for the prime minister, as examples of the bots’ tactics.
The analysis, which examined public discourse surrounding the Labor and Coalition parties throughout March, found that these fake accounts employed various tactics, including ridicule, emotionally charged language, satire, and memes, to maximize visibility and engagement. While both major parties were targeted, the strategies differed. The Coalition faced attacks from pro-Labor posts using hashtags such as "Dutton must go" and "LNP corruption party," aiming to portray Dutton as out of touch and the party as corrupt and incompetent. This, Cyabra suggests, created an artificial sense of support for the current administration, reinforcing partisan sentiment.
These bots were pervasive enough, on several occasions outperforming real users in engagement, to dominate the online narrative and potentially distort public opinion, drowning out authentic voices and manipulating the political conversation. Attribution, however, remains difficult: the intricate nature of these operations makes it hard to pinpoint the individuals or groups orchestrating them.
Acting Australian Electoral Commissioner Jeff Pope has expressed concern about the potential impact of such campaigns on electoral integrity. While the actual incidence of AI-driven interference in the 2024 global elections was relatively low, the potential for disruption remains a concern. That year, dubbed the "year of elections," with roughly half the world's population eligible to cast ballots, highlighted the growing vulnerability of democratic processes to online manipulation.
Beyond the immediate impact on electoral outcomes, the rise of online disinformation raises broader concerns about the health of democratic discourse. Michael Wesley, deputy vice-chancellor of the University of Melbourne, pointed to the violent Capitol riot in the United States as a stark example of the dangers of increasing political polarization, the rise of extremist political actors, and the erosion of trust in established institutions such as the media, government, and universities. The proliferation of fake online profiles and the spread of misinformation contribute significantly to these trends, undermining public trust and deepening societal divisions. Addressing this growing threat requires a multi-faceted approach: increased media literacy, improved regulation of social media platforms, and ongoing research into the evolving tactics of disinformation actors.