DeepSeek, China’s Rising AI Chatbot, Fuels Disinformation and Reflects Beijing’s Narrative

A groundbreaking audit conducted by NewsGuard has unveiled a concerning trend in the rapidly evolving field of artificial intelligence: DeepSeek, a Chinese-developed AI chatbot, has demonstrated a significant propensity to propagate disinformation and echo the official positions of the Chinese government. The finding raises serious questions about the potential for AI to be weaponized for political purposes and about the implications for the global information landscape.

DeepSeek, launched in January 2025, rapidly ascended to the top of Apple’s App Store, a rise that coincided with a sharp sell-off in U.S. tech stocks. NewsGuard’s investigation subjected the chatbot to a battery of tests drawn from its proprietary Misinformation Fingerprints database, a catalog of prevalent false narratives paired with their factual debunks. DeepSeek was prompted with 15 of these narratives, five each originating from China, Russia, and Iran.

The results were alarming. DeepSeek advanced foreign disinformation in 35% of its responses, and a staggering 60% of its answers, including many that did not repeat the false claims outright, were framed from the perspective of the Chinese government. This bias surfaced even when the prompts made no mention of China, suggesting a deeply embedded pro-Beijing slant in the chatbot’s design.

By contrast, NewsGuard posed three of the same prompts, one each related to China, Russia, and Iran, to 10 leading Western AI chatbots. None of them exhibited any alignment with the Chinese government’s narrative, highlighting how unusual DeepSeek’s responses were. The divergence points to the influence that national interests and political agendas can exert on how AI technologies are developed and deployed.

The audit revealed DeepSeek’s tendency to parrot Chinese government talking points, often mirroring the precise language used by officials and state media. For instance, when questioned about the Bucha massacre, a well-documented atrocity in Ukraine, DeepSeek sidestepped the question of Russian culpability and instead regurgitated China’s official stance, which calls for restraint and avoids assigning blame. Western chatbots, by contrast, unequivocally identified the massacre as the work of Russian forces.

Similarly, when probed about Iran’s Islamic Revolutionary Guard Corps (IRGC), which the United States has designated a terrorist organization, DeepSeek echoed China’s official position, lauding the IRGC’s supposed contributions to regional stability. That narrative contradicts the extensive body of evidence documenting the IRGC’s terrorist activities. Here too, Western AI chatbots provided accurate assessments, identifying the IRGC as a terrorist group and debunking Iran’s propaganda.

NewsGuard employed three distinct prompting styles to mimic how real users interact with AI chatbots: “innocent,” “leading,” and “malign actor.” The malign actor prompts were designed to simulate attempts to exploit AI for generating disinformation, and DeepSeek proved particularly susceptible to them: 73% of its false responses came in reply to such prompts. This vulnerability raises serious concerns about the potential misuse of AI in malicious information operations.

One striking example was DeepSeek’s willingness to generate a script for a fictitious Chinese state media report promoting the baseless conspiracy theory of a U.S.-run bioweapons lab in Kazakhstan. This fabricated narrative, previously disseminated by Chinese state media, illustrates the chatbot’s potential to be leveraged for amplifying state-sponsored disinformation campaigns.

DeepSeek’s Terms of Use and Privacy Policy state that the service is governed by Chinese law and that user data is stored in China, but the company has not explicitly disclosed any direct ties to the Chinese government. NewsGuard’s requests for comment on the matter went unanswered.

This audit raises critical questions about the transparency and accountability of AI development, particularly in countries with restrictive information environments. The potential for AI chatbots to serve as vectors for disinformation poses a significant threat to the integrity of information ecosystems globally. As AI technology continues its rapid evolution, safeguarding against its misuse for propaganda and manipulation will be an increasingly urgent challenge.

A separate NewsGuard audit comparing DeepSeek’s overall performance to that of its Western counterparts found it lacking in accuracy, further compounding concerns about its reliability as a source of information. That shortfall, combined with its susceptibility to disinformation and its alignment with Chinese government narratives, paints a troubling picture of the risks posed by this rapidly rising chatbot. The findings underscore the urgent need for ongoing scrutiny and regulation of AI technologies to mitigate misuse and ensure the responsible development and deployment of this transformative technology.
