The Rise of AI-Fabricated Disinformation in China and the Government’s Response
The rapid advancement of artificial intelligence (AI) has brought numerous benefits, but it has also opened the door to new forms of misinformation and manipulation. China, like many other countries, is grappling with the growing problem of AI-generated disinformation, which has the potential to disrupt public order and incite panic. Three widely circulated stories in recent years, involving an inflated mortality rate, fabricated disaster footage, and a distorted policy interpretation, highlight the ease with which AI can be used to create convincing but false narratives. These cases, all debunked by authorities, underscore the urgent need for effective regulation and public awareness campaigns to combat the spread of AI-driven falsehoods.
The first case involved a claim that the mortality rate among China’s post-1980s generation had reached a staggering 5.2% in 2024. This alarming statistic, attributed to the Seventh National Census, spread rapidly across social media platforms through various self-published accounts. Fact-checkers swiftly debunked the claim, noting that the census was conducted in 2020 and therefore could not contain figures for 2024. State broadcaster CCTV attributed the false figure to an "AI computation error," highlighting the potential for inaccuracies even with sophisticated AI models. The incident led to the detention of three individuals who originated the rumor and warnings for six others who disseminated it.
The second case involved a fabricated video depicting a massive fire at an industrial zone in Shaoxing, Zhejiang province. The video, created using AI, was designed to go viral and generate online traffic for profit. The individuals behind it were sentenced to prison, demonstrating the serious legal consequences of using AI to create and spread false information that could cause public alarm. The case exemplifies the malicious use of AI for personal gain, with no regard for the widespread panic and disruption it could cause.
The third case revolved around a misrepresentation of a policy change in Guangzhou regarding electric bicycle use. While the city introduced restrictions to improve road safety, an organized group exploited this change by publishing AI-generated articles falsely claiming a complete ban on food delivery services. This fabricated news quickly spread online, igniting concerns among delivery service users and workers alike. The incident highlighted the potential for AI-generated rumors to distort policy interpretations and fuel public anxiety.
These three incidents, among others presented at the 2025 China Internet Civilization Conference, served as stark examples of the growing threat of AI-generated disinformation. The Cyberspace Administration of China (CAC), alongside the China Association for Science and Technology, released details of these cases to guide future regulatory efforts and raise public awareness. The common thread linking these incidents was the malicious or careless use of AI tools to generate and disseminate false information, often with the intent to generate online traffic or profit.
The Chinese government has responded to this emerging challenge by reinforcing its commitment to combating online rumors and implementing regulatory measures to curb the misuse of AI technologies. Existing laws penalize the spread of rumors that disrupt public order, with penalties including detention and fines. More serious violations, such as fabricating reports about disasters or epidemics, can lead to lengthy prison sentences. The government’s focus has shifted towards addressing the unique challenges posed by AI-generated disinformation, recognizing the potential for rapid dissemination and the difficulty in distinguishing between authentic and fabricated content.
To address the specific challenges posed by AI, China has begun implementing regulations targeting deepfakes and other AI-generated material. These rules mandate clear labeling of all AI-created content to distinguish it from authentic media. Platforms hosting such content are responsible for ensuring accurate labeling, removing harmful material, and establishing mechanisms for refuting rumors. Recent guidelines go further, requiring both users and service providers to mark AI-generated content across formats, including text, audio, images, and video, with clear indicators to ensure traceability and accountability. Together, these measures reflect a proactive effort to regulate AI and to keep pace with the technology’s evolving capacity for misuse in spreading disinformation.