ARTS / CULTURE & LEISURE
Platforms urged to enforce pre-reviews and traceability to counter AI face theft
Published: Apr 07, 2026 10:34 PM
Illustration: Liu Xiangya/GT

"AI does not understand that sometimes silence speaks louder than shouting, and imperfection is perfect… That one percent injection of 'lived-in soul' is what makes AI works stand out," wrote Feng Shengyong, director of the TV Drama Department of the National Radio and Television Administration (NRTA), in an article published by the People's Daily on Tuesday. 

While acknowledging the efficiency revolution brought by AI, Feng also warned of the risk of "disorder" as AI-generated content becomes increasingly indistinguishable from reality, making the protection of personal rights and interests an urgent issue. 

Experts noted that the solution lies in clarifying and strengthening platforms' pre-review and material traceability obligations, and in shifting the burden of proof to content creators. 

The urgency of this issue has been highlighted by a spate of high-profile AI face theft cases. On Sunday, Chinese star Yi Yangqianxi, also known as Jackson Yee, released a statement condemning the unauthorized use of his likeness in AI short dramas on multiple platforms and demanding their immediate removal. 

A day later, Hongguo, a free short drama streaming platform, issued an announcement on governing the illegal use of materials in AI short dramas. The announcement stated that in the first quarter of 2026, the platform had removed 1,718 AI-generated comic-style short dramas that violated its governance norms. The platform also launched a special centralized rectification campaign that completed comprehensive inspections of 15,000 works, disposed of 670 illegal works in accordance with regulations, and exposed four types of typical cases, China News Service reported on Tuesday.

Earlier, a Hanfu (a type of traditional clothing of the Han ethnic group) enthusiast fell victim to face theft when an AI short drama deliberately defamed his image and spread it on platforms including Hongguo, sparking a trending topic on China's X-like platform Sina Weibo. 

From top celebrities like Yee, Xiao Zhan, Yang Zi and Dilraba Dilmurat, whose likenesses have been copied to drive traffic for short dramas, to even ordinary people reduced to "raw materials" in the AI content industry chain, AI face theft has spread from celebrities to the general public. 

Alongside face theft, "voice theft" is also rampant. More than 20 dubbers recently issued a joint statement on social media, demanding an end to the unauthorized collection of their voices for AI training and commercialization. 

In early April, the Actors Committee of the China Federation of Radio and Television Associations spoke out against such infringements, marking the official start of a "face protection battle" concerning everyone's digital personality rights. 

Zhu Xinmei, director of the Institute of International Communication at the Development Research Center of the NRTA, noted that the worsening state of AI face theft is rooted in the sharp lowering of technical thresholds and the lag in regulatory governance. 

"AI face-swapping technology has completely moved beyond professional fields to become popular and automated," she told the Global Times on Tuesday. "The convenience of generating face-swapped content with one click allows anyone to complete face theft in a short time, while the speed of AI content generation far exceeds the capacity of manual review."  

She added that platforms' excuse of "being unable to keep up with the speed of content production" essentially stems from a lack of both technical prevention and responsibility. 

The core driver behind the persistent infringements, Zhu argues, is the serious imbalance between infringement costs and benefits. 

Infringers avoid the high cost of hiring celebrities by stealing their faces, while reaping substantial online benefits, creating a vicious cycle that is difficult to break. She cited Dilraba's case as an example: although the Beijing Internet Court established the "identifiability" principle for infringement in that ruling, the long cycle and high cost of judicial rights protection mean most infringers have already profited before being held accountable. 

Legal ambiguity further exacerbates the problem. Wu Xiaolin, a Beijing-based lawyer, pointed out that the most pressing difficulty in reconstructing rules concerning image rights in the AI era is the "permanent digital identity occupation" caused by face theft. 

"Without the technical means to block it, using legal measures to cut off the source of digital identity infringement will be the key breakthrough," he told the Global Times on Tuesday. 

Wu suggests clarifying the "identifiability" standard in the law by setting a 60 percent similarity threshold. Under this standard, any AI-generated face that can be linked to a specific individual should constitute infringement. 

He also proposed increasing the evidence weight of victim-provided materials and promoting electronic evidence preservation platforms to ease the burden of proof. 

To address these issues, Zhu emphasized the need for coordinated efforts across the legal framework, platforms and technology. She called for stronger legal regulations to clarify the protection of personal data assets and the standards governing AI-generated faces. She also urged platforms to establish technical prevention systems that intercept infringement at the source, while promoting digital watermarking and traceability mechanisms. Echoing director Feng's proposal to "focus on both standardized management and application improvement," both experts agreed that shifting the burden of proof to creators is crucial: creators, rather than victims, should be required to prove data compliance. 

The author is a reporter with the Global Times. life@globaltimes.com.cn