A Network of Disinformation on YouTube Targets Spanish Politicians

A sprawling network of YouTube channels is spreading disinformation about Spanish politicians, fabricating confrontations with European leaders and disseminating false narratives about party affiliations and political scandals. These videos employ sophisticated AI techniques to create realistic yet entirely fabricated scenarios, misleading viewers and potentially influencing public opinion. As of March 12, 2025, Maldita.es had identified 49 active channels dedicated to propagating this misinformation, with over 400,000 subscribers and a combined 32.2 million views. Five additional channels previously engaged in similar tactics are now defunct. This network represents a significant escalation in the use of AI-driven disinformation campaigns targeting political figures.

The disinformation campaign centers on fabricated clashes between Spanish and European politicians. Examples include false claims that Italian Prime Minister Giorgia Meloni criticized former Spanish Prime Minister José Luis Rodríguez Zapatero in the European Parliament, that European Commission President Ursula von der Leyen "humiliated" Spanish Minister Teresa Ribera, and that Portuguese Prime Minister António Costa "crushed" Spanish Prime Minister Pedro Sánchez. These videos most often target politicians from the Spanish Socialist Workers' Party (PSOE), Sumar, and Podemos, but also attack figures from other parties. For instance, false narratives have circulated about Cayetana Álvarez de Toledo leaving the Popular Party (PP) for Vox, and about Isabel Díaz Ayuso defecting to Vox after accusing PP leader Alberto Núñez Feijóo of embezzlement.

The channels employ a consistent formula to maximize impact and engagement. Thumbnails often feature AI-generated images of the politicians supposedly confronting each other, accompanied by fabricated quotes. The videos themselves utilize synthetic voices narrating over stock photos of the politicians involved. Titles are designed to be sensationalist, featuring emojis, capital letters, and exclamation marks, with words like "humiliation," "crushing," and "trashing" used to describe the fabricated interactions. This consistent format allows the channels to quickly and efficiently produce a high volume of misleading content.

These channels are relatively new, mostly created in 2024 or 2025. However, some have older creation dates, suggesting that existing accounts have been repurposed for disinformation. One example is a channel currently named "Your Real Politics," created in 2013 but whose earliest currently available video dates to January 30, 2025. Another channel, initially focused on celebrity gossip and international news, shifted to political disinformation in 2025. This repurposing of older channels makes tracking and combating these networks more challenging.

Analysis by Maldita.es, supported by expert opinion from computer engineer Nieves Ábalos, strongly suggests the use of AI-generated synthetic voices in these videos. The speech patterns and intonation exhibit a characteristic monotony and lack of natural variation indicative of digitally created audio. While definitive proof of synthetic voice generation can be difficult to obtain, these observations raise serious concerns about the widespread use of AI in creating deceptive content. The videos also often use AI-generated imagery, further blurring the lines between reality and fabrication.

Despite YouTube’s stated policies against misinformation and unlabeled synthetic content, these videos remain readily accessible. While some have been flagged with YouTube’s "Altered or synthetic content" label, many others have not, even those showing clear signs of AI manipulation. This inconsistent enforcement raises questions about the platform’s effectiveness in combating AI-driven disinformation campaigns. Moreover, these channels persist despite YouTube’s policy of terminating accounts that engage in harmful misinformation, underlining how difficult platforms find it to moderate content and enforce their own rules, and highlighting the urgent need for more robust, proactive measures against AI-generated disinformation online.
