Reframing Disinformation: A Call for an Organized Crime Approach

The Australian government’s current strategy for combating disinformation, centered on content moderation and platform access control, is fundamentally flawed. This approach, akin to mopping up a flood while ignoring the burst pipe, addresses surface-level symptoms without tackling the root cause: the organized networks perpetrating these campaigns. A more effective strategy would be to treat disinformation as organized crime, focusing on dismantling the infrastructure and networks behind it, rather than chasing individual pieces of content. This shift in perspective would allow authorities to target the malicious actors orchestrating these campaigns without impinging on freedom of expression and open discourse, vital components of a healthy democracy.

The existing reliance on content moderation is proving increasingly unsustainable. Human moderation is resource-intensive, while AI-powered solutions lack the nuanced understanding of context, intent, and cultural subtleties necessary to differentiate harmful content from legitimate, albeit controversial, discussion. Furthermore, AI systems often struggle with non-English languages and regional dialects, further limiting their effectiveness. Restricting platform access, such as through age-based bans, also presents significant challenges. Enforcement is difficult, and more importantly, such bans contradict fundamental democratic principles by limiting young people’s access to vital spaces for civic engagement and impeding their development as informed digital citizens.

The analogy to organized crime offers a compelling alternative. Just as laws targeting organized crime focus on the structure and patterns of criminal networks rather than specific commodities traded, disinformation laws should target the scale, coordination, financial flows, and systematic manipulation inherent in these campaigns, not individual pieces of content. This approach would empower governments, social media platforms, and cybersecurity partners to dismantle entire disinformation enterprises, rather than playing a futile game of whack-a-mole with individual posts or resorting to ineffective access bans.

Every disinformation campaign originates with an initiator who intentionally spreads falsehoods to manipulate public perception. This intent distinguishes disinformation from misinformation, which is false information shared without the intent to deceive. While fact-checking and content moderation have their place, they are insufficient on their own. The real battleground lies in identifying and disrupting the organized networks behind these campaigns, the malicious actors who profit from sowing discord and manipulating public discourse.

This organized crime approach necessitates proving several key elements: criminal intent, demonstrable harm or risk to public safety, structured and coordinated efforts, and the existence of proceeds of crime. The definition of disinformation itself, involving the intent to deceive for malicious purposes, inherently addresses the first two elements. The Australian Security Intelligence Organisation (ASIO) has consistently highlighted the growing threat of foreign interference, which has now surpassed terrorism as its primary security concern. ASIO's warnings, coupled with research from the Australian Strategic Policy Institute (ASPI) exposing foreign interference campaigns targeting Australian elections and referendums, clearly demonstrate both malicious intent, whether driven by financial or political motives, and the potential for significant harm to individuals, institutions, and society as a whole.

The element of structured and coordinated effort is also readily demonstrable. Disinformation campaigns are orchestrated by organized networks employing sophisticated tactics, including identity obfuscation, fake news websites, bot networks, coordinated amplification, and exploitation of platform vulnerabilities. Companies like Meta, Google, Microsoft, OpenAI, and TikTok are already engaged in detecting and disrupting these operations, demonstrating a clear understanding of their organized nature. Finally, the financial aspect of disinformation operations provides a crucial avenue for intervention. These campaigns are funded enterprises, whether through advertising revenue, fraudulent schemes, or foreign backing. Targeting the financial infrastructure of these networks, including shell companies, suspicious transactions, and the use of compromised accounts, allows authorities to distinguish malicious actors from individuals expressing genuine, if contested, beliefs.

In conclusion, the current approach to disinformation, focused on content and platform access, is inherently flawed, risking either over-censorship that stifles legitimate speech or under-moderation that allows harmful content to proliferate. By reframing disinformation as organized crime, focusing on the networks and financial flows that fuel these campaigns, we can leverage existing legal frameworks and investigative tools to effectively combat this threat without undermining fundamental democratic values. This shift in perspective is not merely a tactical adjustment; it represents a fundamental rethinking of the problem, allowing us to address the root cause of disinformation and protect the integrity of our public discourse.
