China’s cyberspace authority releases draft measures to regulate anthropomorphic AI interaction services
Published: Dec 27, 2025 06:37 PM
Artificial intelligence Photo: VCG

China's cyberspace regulator has released a slew of draft measures on the management of anthropomorphic artificial intelligence (AI) interaction services, and is now soliciting public comment on it. 

The draft measures propose an inclusive, prudent and tiered regulatory approach, with differentiated oversight based on risk levels. They emphasize both support for innovation and safeguards against abuse, according to the official WeChat account of the Cyberspace Administration of China (CAC) on Saturday.

According to the draft, the measures apply to products and services that use AI technologies to simulate human personality traits, thinking patterns and communication styles, and that provide emotional interaction with the public in China through text, images, audio or video.

The draft explicitly prohibits the generation or dissemination of content that endangers national security, harms national honor or interests, undermines ethnic unity, promotes illegal religious activities, spreads rumors disrupting economic or social order, or involves pornography, gambling, violence or incitement to crime. 

It also bans content that encourages or glamorizes suicide or self-harm, as well as practices such as verbal abuse or emotional manipulation that could harm users' physical or mental health or undermine their dignity.

According to the draft, AI service providers must clearly inform users that they are interacting with AI rather than a human, and must issue pop-up reminders when users first log in or re-enter the service, or if users show signs of overdependence.

Lin Wei, president of Southwest University of Political Science and Law, said in an article explaining the draft, published on the CAC's WeChat account, that breakthroughs in AI technology are propelling human-machine interaction beyond mere functional assistance toward emotional and personalized engagement. While reshaping social interaction paradigms, this evolution also gives rise to a series of new risks and challenges.

Such risks, characterized by their subtle and transmissible nature, may infringe upon the rights and interests of citizens, and even undermine the foundations of social ethics and trust. Without effective governance, these risks could severely impede the healthy development of AI and harm the public interest, said Lin, who is also vice president of the China Law Society.

Lin noted that the release of the draft aligns with national strategic priorities and reflects a forward-looking, precise and systematic approach to AI governance. By clarifying responsibility boundaries for innovation, the measures aim to ensure that technological development remains safe, fair and sustainable.

The draft regulation targets risks arising specifically from anthropomorphism and emotional interaction, focusing on the blurring of human-machine boundaries. It establishes a multi-dimensional risk prevention framework and embeds accountability across key stages of service development and deployment to enable more precise risk control, according to Lin. 

Lin stressed that the measures provide clearer expectations for the healthy development of anthropomorphic AI services in China, and also offer a practical and forward-looking reference for global governance of similar emerging technologies.

Global Times