X (Formerly Twitter) Identified as Major Source of Disinformation in EU Study
BRUSSELS – A recent European Commission study has found alarming levels of disinformation circulating on major social media platforms, with X, formerly known as Twitter, identified as the most significant contributor. The study, conducted by disinformation-monitoring startup TrustLab, analyzed more than 6,000 unique social media posts across six prominent platforms: Facebook, Instagram, LinkedIn, TikTok, X, and YouTube. It focused on Spain, Poland, and Slovakia, countries considered particularly vulnerable to disinformation because of upcoming elections and their proximity to the war in Ukraine, and assessed the "ratio of discoverability" of disinformation: the proportion of sensitive content on a platform that qualifies as disinformation.

X exhibited the highest ratio, while YouTube recorded the lowest. The findings prompted a stern warning from EU Values and Transparency Commissioner Vera Jourova, directed at X: "You have to comply with the hard law. We’ll be watching what you’re doing."
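To make the metric concrete, here is a minimal sketch of how such a ratio could be computed. The per-platform counts below are hypothetical, since the article does not report the study's underlying numbers; only the definition of the ratio comes from the study.

```python
# Illustrative only: hypothetical per-platform counts, not TrustLab's actual data.
posts = {
    "X":       {"sensitive": 1000, "disinformation": 120},
    "YouTube": {"sensitive": 1000, "disinformation": 10},
}

for platform, counts in posts.items():
    # "Ratio of discoverability": share of sensitive posts that qualify as disinformation.
    ratio = counts["disinformation"] / counts["sensitive"]
    print(f"{platform}: {ratio:.1%}")
```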
The study’s findings are particularly concerning given X’s withdrawal, under Elon Musk’s leadership, from the EU’s voluntary Code of Practice on Disinformation. Twitter had been a signatory since 2018, and X’s departure raised concerns about the platform’s commitment to combating disinformation. However, Commissioner Jourova emphasized that X remains subject to the EU’s Digital Services Act (DSA), a comprehensive regulation governing the conduct of large tech platforms. The DSA effectively turns the voluntary code into a legally binding code of conduct, holding platforms accountable for their content moderation practices. "Mr. Musk knows that he is not off the hook by leaving the code of practice," Jourova asserted, "because now we have the Digital Services Act fully enforced." Non-compliance with the DSA can result in fines of up to six percent of a company’s global annual turnover.
The study arrives amid heightened concern about Russian disinformation campaigns targeting European countries. In September, the EU accused social media companies of failing to adequately curb the spread of Kremlin-backed disinformation since Russia’s invasion of Ukraine, and the Commission noted a concerning growth in the "reach and influence of Kremlin-backed accounts" in 2023. Jourova described Russia’s strategy as "a multi-million euro weapon of mass manipulation" aimed at undermining democratic values and creating a false equivalence between democracy and autocracy. The threat is particularly acute given the ongoing war in Ukraine and the upcoming European elections.
The European Commission’s concerns extend beyond human-generated disinformation to the increasing sophistication of AI-generated content. Recognizing AI’s potential to amplify disinformation campaigns, Jourova said the Commission plans to meet with representatives from OpenAI to discuss ways of mitigating the risks of AI-generated disinformation in the lead-up to the elections.
The study’s findings and the Commission’s warnings highlight the urgent need for robust content moderation on social media platforms. The focus on X underscores the challenges posed by the platform’s shifting policies and its outsized role in the spread of disinformation. The EU’s firm stance, backed by the legally binding DSA, sends a clear message that platforms will be held accountable for combating disinformation, particularly around sensitive events such as elections and geopolitical conflicts. The upcoming European elections will be a critical test of the DSA’s effectiveness and of platforms’ commitment to upholding democratic values against evolving disinformation tactics.
The growing threat of AI-generated disinformation adds another layer of complexity. The EU’s early engagement with leading AI companies such as OpenAI signals a forward-looking approach to this emerging front in the fight against disinformation. As AI tools grow more sophisticated, distinguishing authentic from manipulated content will become harder, and collaboration between policymakers and tech companies will be essential to protecting the integrity of information ecosystems.