China’s Two Sessions Focus on AI Governance Amidst Deepfake and Disinformation Concerns

The 2025 Two Sessions, China’s annual political gatherings of the National People’s Congress (NPC) and the Chinese People’s Political Consultative Conference (CPPCC), have brought the governance of artificial intelligence (AI) into sharp focus. Prominent delegates and members raised concerns about the escalating misuse of AI-powered tools, particularly deepfakes and voice cloning, for illegal activities and the spread of disinformation, urging swift legislative action and strengthened regulatory measures. This growing apprehension underscores the challenges posed by rapidly evolving AI technology and its potential for societal harm.

Leading the call for action was Lei Jun, NPC deputy and founder of Xiaomi, who submitted a motion advocating stronger governance over AI deepfake and voice cloning infringements. Recognizing the widespread application of these technologies in industries such as film, advertising, and social media, Lei highlighted the easy access to these tools, their low technical barriers, and the clandestine nature of their misuse as significant impediments to effective oversight. He proposed accelerating legislative processes to address AI applications, alongside promoting industry self-regulation and collaborative governance initiatives. Specifically, Lei urged internet platforms to develop robust technical capabilities for identifying AI-generated content and to bolster their content monitoring systems.

Li Dongsheng, another NPC deputy and chairman of TCL, echoed these concerns, specifically targeting the issue of AI-powered deepfake fraud. He advocated for the establishment of dedicated laws and regulations to combat this emerging threat. Furthermore, Li emphasized the crucial role of internet platforms in implementing mandatory labeling of AI-generated content to mitigate malicious misuse and ensure accountability for illegal activities conducted through these platforms. This emphasis on platform responsibility signifies a growing trend in holding online intermediaries accountable for content generated and disseminated through their channels.

Enzat Tohti, a CPPCC member from Xinjiang, further underscored the urgency of AI content regulation. Citing the dissemination of fabricated images of a "buried boy" following the January 2025 Xigaze earthquake as a prime example of AI misuse, Tohti cautioned that such content can infringe intellectual property rights, violate personal privacy, and create social unrest. He recommended swift legislation by the authorities, coupled with strengthened regulatory measures and enhanced platform accountability.

These proposals from prominent figures at the Two Sessions reflect a growing recognition of the intertwined challenges and opportunities presented by AI. While acknowledging the transformative potential of AI across various sectors, the delegates emphasized the critical need for a robust governance framework to prevent its misuse for malicious purposes. The emphasis on proactive legislation, industry self-regulation, platform accountability, and public awareness campaigns signals a multifaceted approach to address the complex ethical and legal implications of AI.

The concerns raised at the Two Sessions are not unique to China. Globally, governments and organizations grapple with the implications of AI-generated disinformation and its potential to erode trust, manipulate public opinion, and incite violence. The discussions in China underscore the global imperative for collaborative efforts to develop international standards and best practices for responsible AI development and deployment. The call for proactive legislation reflects a desire to anticipate and mitigate the potential harms of AI while fostering innovation and responsible technological advancement.

The focus on AI governance at China's Two Sessions signals a commitment to addressing the escalating risks associated with unchecked AI development. The proposed measures, encompassing legislative action, industry self-regulation, and platform accountability, aim to strike a balance between fostering innovation and safeguarding societal interests. The discussions and recommendations emerging from these sessions will likely shape the future of AI governance in China and may influence international deliberations on the responsible development and deployment of this transformative technology. The urgency with which these concerns were raised underscores the need for a proactive, comprehensive approach to the ethical and societal implications of AI in the digital age.