AI-Driven Campaign Tactics Undermine Democratic Processes

By Press Room · July 25, 2025

The Pandora’s Box of Social Media and Electoral Integrity in Southeast Asia

Social media’s pervasive presence in Southeast Asia, with over half the population actively engaging on various platforms, has revolutionized political campaigning. Political parties, both large and small, increasingly rely on these platforms to enhance their visibility and connect with voters. However, this digital transformation has also unleashed a torrent of problems that threaten the very foundation of electoral integrity in the region. Disinformation, the deliberate spread of false information, has become a significant challenge, manipulating public opinion and potentially swaying election outcomes. The ease with which disinformation can be disseminated online, coupled with the difficulties in rapid fact-checking during short campaign periods, creates a fertile ground for malicious actors to exploit.

The sophistication of disinformation campaigns has grown exponentially. State and non-state actors deploy a range of tactics, including online harassment, the use of illicitly obtained personal data, and sophisticated micro-targeting strategies. The Cambridge Analytica scandal exposed the vulnerabilities of personal data and its potential for misuse in political campaigns. In Southeast Asia, a well-organized industry has emerged, offering “digital campaign manipulation services” to political clients. These services employ “buzzers”—bots, influencers, and cyber-troopers—to generate artificial hype around candidates and smear opponents through revisionist history, hate speech, personal attacks, and viral memes. This orchestrated manipulation undermines genuine public discourse and erodes trust in the democratic process.

Examples of such manipulation are rife in the region. Malaysia’s 2018 general election saw a deluge of anti-opposition bot activity on Twitter. In Indonesia’s 2019 election, both leading candidates employed “buzzers” to spread disinformation and attack each other. The Philippines has witnessed a surge in online public relations consultancies specializing in disinformation campaigns, notably during the 2016 and 2022 presidential elections. These campaigns often utilize influencers with dedicated follower bases to subtly inject political messaging into otherwise non-political content, making it difficult for voters to discern genuine opinions from paid endorsements.

Further complicating the landscape is the emergence of artificial intelligence (AI) tools capable of generating realistic yet fabricated images and videos, known as deepfakes. These tools add another layer of deception to online political campaigns. In Indonesia’s 2024 presidential election, the Prabowo campaign employed AI to create positive images of the candidate, seemingly designed to soften his image in the face of past human rights concerns. This included the creation of an AI image-generation platform, PrabowoGibran.ai, allowing voters to insert themselves into photos with the candidate, blurring the lines between reality and fabrication. These AI-driven tactics raise serious ethical questions about fairness and transparency in elections.

The challenges posed by AI-powered disinformation are multifaceted. The sheer volume of online content, coupled with the speed at which it spreads, makes effective fact-checking and content moderation a daunting task. Encrypted messaging platforms further hinder the detection and tracking of viral falsehoods. Governments and digital platforms often struggle to keep pace with the rapid evolution of these technologies, making it difficult to hold perpetrators accountable and minimize the damage inflicted by disinformation campaigns. The transient nature of online content and the anonymity afforded by some platforms make it challenging to trace the origin and source of malicious content, further complicating efforts to combat it.

Southeast Asian governments have responded with legislative measures aimed at data protection, user privacy, and content moderation. Some countries have implemented frameworks to regulate AI-generated political content, while others have outright bans on deepfakes. However, these legal tools are not without their pitfalls. Vaguely worded laws can be misused to stifle dissent and suppress political opposition. Cases in Thailand and Singapore have highlighted how legislation intended to combat disinformation can be weaponized against journalists and opposition figures, raising concerns about the potential for abuse and the chilling effect on freedom of expression. The challenge lies in striking a balance between regulating harmful content and protecting fundamental democratic rights. Furthermore, the transnational nature of online disinformation requires international cooperation to effectively address the issue.

Social media platforms themselves have implemented rules to monitor and remove harmful content, but these rules vary across platforms and are subject to change. The lack of consistent standards and the potential for arbitrary enforcement create further challenges. Western-based platforms often lack the resources and linguistic expertise to effectively monitor content in the diverse languages of Southeast Asia. Crowdsourced fact-checking initiatives, while potentially valuable, are not a foolproof solution and may not deter dedicated purveyors of disinformation. The constant evolution of technology requires ongoing adaptation and refinement of regulatory frameworks and platform policies.

The pervasive spread of disinformation, coupled with AI-driven manipulation and coordinated bot campaigns, erodes public trust in electoral institutions and the democratic process. The growing sophistication of AI tools makes it increasingly difficult to identify and hold accountable the creators of malicious content. Legislative measures and self-regulation by digital platforms have proven insufficient to address the multifaceted and rapidly evolving threats posed by online disinformation. A multi-stakeholder approach involving governments, social media platforms, voters, election management bodies, fact-checking organizations, and civil society is crucial to combating this growing menace. International collaboration is essential to develop effective strategies and share best practices to protect the integrity of elections in the digital age.

Governments and election management bodies must engage with tech companies to develop codes of conduct and best practices based on international standards. Continued support for fact-checking organizations and initiatives that promote digital literacy is vital. Fostering robust transnational partnerships is essential to avoid duplication of effort and to develop comprehensive strategies that transcend national borders and specific platforms. Equipping citizens with the critical-thinking skills to distinguish credible information from disinformation empowers them to navigate the complex digital landscape and make informed decisions. Only through concerted and collaborative action can we hope to safeguard electoral integrity and strengthen democratic processes in the face of these evolving challenges.

The future of democratic elections in Southeast Asia hinges on the ability of stakeholders to effectively address the threats posed by online disinformation. This requires a sustained commitment to collaboration, innovation, and a proactive approach to adapting to the ever-changing landscape of digital technology. Failure to do so risks undermining public trust in democratic institutions and processes, potentially paving the way for further manipulation and erosion of democratic values. The stakes are high, and the need for action is urgent.

© 2025 DISA. All Rights Reserved.