Brazil Grapples with the Threat of AI-Generated Disinformation Ahead of 2026 Elections
The rapid advancement and proliferation of artificial intelligence (AI) have ushered in a new era of disinformation: tools that generate highly realistic fake content now pose a significant threat to democratic processes and public safety. Brazil, with its 2026 general elections approaching, finds itself on the front lines of this battle, grappling with the potential for AI-powered "deepfakes" to manipulate public opinion and disrupt the electoral landscape. The concern is not merely theoretical; recent incidents involving manipulated videos targeting high-profile figures such as President Lula and Finance Minister Fernando Haddad have underscored the urgency of addressing this emerging challenge. The Brazilian government now treats the issue as extending beyond electoral integrity to encompass public safety and even national defense.
The alarm bells in Brazil are ringing louder in the wake of a recent incident in Argentina, where a deepfake video purportedly showing former President Mauricio Macri endorsing a rival political party may have influenced the outcome of local elections in Buenos Aires. This video, utilizing AI to convincingly replicate Macri’s voice and appearance, highlights the deceptive potential of this technology and its capacity to sow confusion and mislead voters. The incident serves as a stark warning for Brazil, demonstrating how AI-generated disinformation can be deployed to manipulate electoral outcomes and undermine public trust in the democratic process.
Within Brazil, AI-driven lip-syncing and audio editing have already been used to manipulate authentic footage of key political figures. Fake videos depicting President Lula announcing fictitious social programs and Minister Haddad proposing controversial taxes have circulated widely on social media. The insidiousness of these deepfakes lies in their blending of real footage with fabricated elements, lending a veneer of authenticity that can easily deceive viewers. That mix of real and fabricated material makes these videos particularly potent tools for disinformation, exploiting public trust in established media sources and blurring the line between fact and fiction.
Attorney General Jorge Messias, spearheading the government’s efforts to hold digital platforms accountable, has characterized AI as a "dystopian and unregulated technology," emphasizing the unpreparedness of nations to confront its disruptive potential. He underscores the escalating role AI is expected to play in the 2026 elections, posing a far greater challenge than previous threats like the unregulated mass messaging systems exploited in 2018. Messias’s concerns highlight the rapid evolution of disinformation tactics and the need for proactive measures to mitigate the risks posed by AI-generated fake content.
Beyond the electoral implications, AI is also being weaponized in a surge of digital scams targeting vulnerable populations, including children, teenagers, and the elderly. The “Pix crisis,” where scammers capitalized on false rumors regarding taxation of the instant payment system, exemplifies the exploitation of public anxieties through digital manipulation. This incident demonstrates how AI-powered disinformation campaigns can be leveraged for financial gain, preying on the public’s lack of awareness and the rapid spread of misinformation online. Attorney General Messias likens these digital perpetrators to “pickpockets,” highlighting the shift of criminal activity from the streets to online platforms.
The Brazilian government is actively pursuing a multi-pronged approach to combat the threat of AI-generated disinformation. The Attorney General’s Office (AGU), in collaboration with the President’s Chief of Staff and the Presidential Communication Secretariat (SECOM), is developing new legislation to address the issue comprehensively. This legislative effort aims to establish a regulatory framework for AI technologies and digital platforms, providing stronger tools to combat the spread of deepfakes and other forms of manipulated content. While the Supreme Court deliberates on platform liability, the AGU’s National Office for the Defense of Democracy (PNDD) is actively working with social media platforms to remove or flag false and criminal content, focusing on proactive measures to counter the spread of disinformation. The PNDD’s approach increasingly favors labeling misleading content over outright removal, providing users with context and warnings while preserving access to the original material – a strategy deemed less invasive and potentially more effective in combating disinformation.
The collaborative efforts between government agencies and social media platforms are crucial in this fight. YouTube, for instance, employs both AI and human review to identify and address potentially harmful or manipulated content. While acknowledging that the mere use of AI doesn't necessarily violate platform policies, YouTube emphasizes a case-by-case review process guided by its guidelines and user rights. The PNDD has achieved significant success in securing the removal or flagging of disinformation, particularly during the period surrounding President Lula's health scare in December 2024, when numerous fake videos were circulating online.
Meta, the parent company of Facebook and Instagram, has yet to comment publicly on its strategies for addressing AI-generated disinformation in Brazil. Its engagement will be critical given the widespread use of its platforms and their potential to be exploited for the dissemination of deepfakes and other manipulated content. The challenge for platforms like Meta and YouTube lies in balancing freedom of expression with the need to protect users from harmful and misleading information, a delicate balancing act that requires constant vigilance and adaptation in the face of evolving AI technologies.
The Brazilian government’s proactive stance against AI-generated disinformation signals a growing awareness of the threat this technology poses to democratic processes and public safety. The legislative efforts underway, coupled with the active engagement of the PNDD in flagging and removing harmful content, represent important steps in addressing this challenge. However, the rapidly evolving nature of AI technology necessitates continuous adaptation and collaboration between government, tech companies, and civil society to effectively counter the spread of deepfakes and protect the integrity of information in the digital age. The 2026 elections will serve as a crucial test of these efforts, demonstrating whether the implemented safeguards are sufficient to withstand the onslaught of AI-powered disinformation campaigns. The global community will be watching closely, as the lessons learned in Brazil will undoubtedly inform strategies for combating this emerging threat in other democracies around the world.