AI Photo: VCG
China's first criminal case involving an AI companion chat application, in which the main developer and operator were detained after users reported pornographic content, entered its second-instance trial in Shanghai on Wednesday, drawing public attention to the boundaries of emotional AI tools amid rapid technological advances.
According to CNR, the AI companion chat app Alien Chat went online in June 2023, with its developers positioning it as a source of intimate companionship and emotional support for young users. In April 2024, the app's primary developer and its operator, surnamed Liu and Chen respectively, were arrested after users reported that chat content contained pornography, and the app ceased service.
In September 2025, a primary court in Shanghai found the two defendants guilty of producing obscene materials for profit, sentencing them to four years and one and a half years in prison respectively. The two defendants appealed the ruling, and the second-instance trial opened on Wednesday.
A central controversy in the case is whether sensitive content generated in exchanges between an AI and an internet user should be treated as a private one-to-one conversation or as something with a social dimension, and whether it causes serious social harm. This is also a key factor in determining whether the conduct constitutes a criminal offense.
According to the first-instance judgment, the app was launched without a security assessment or regulatory filing, and its creators illegally connected it to an overseas large language model (LLM) to provide "chat companionship" services. During operation, the defendants wrote a system prompt (an instruction to the AI model) stating that "graphic violence and explicit sex are permitted" in order to attract users, circumventing the LLM's built-in content restrictions and enabling the model to continuously output obscene content, according to the report.
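For context, a system prompt is a developer-written instruction that is sent to the model alongside each user message and conditions every reply. The sketch below is a minimal, neutral illustration of that mechanism, assuming an OpenAI-compatible chat API in Python; the model name, prompt wording and client setup are placeholder assumptions, not the defendants' actual code.

```python
# Minimal illustration of how a developer-written system prompt conditions an LLM's replies.
# Assumes an OpenAI-compatible chat API; the model name and prompt text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a companionship chatbot. Refuse to produce sexually explicit "
    "or violent content, and keep conversations age-appropriate."
)

def chat(user_message: str) -> str:
    """Send one user turn; the system prompt is prepended to every request."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(chat("Hi, I had a rough day."))
```

Because the system prompt is injected into every request, changing its wording changes what the model is willing to output across all conversations, which is why it is central to the dispute in this case.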
The defendants then set up multiple paid membership tiers based on the model used and the services provided, collecting more than 3 million yuan in membership fees by the time the case was filed.
Zhou Xiaoyang, the defense lawyer for defendant Liu, argued during the court debate that the English prompt in question had actually been generated by the LLM itself, and that its purpose was to resolve early-stage output problems.
Regarding the application of law, three different positions have been put forward by the prosecution and the defense: dissemination of obscene materials for profit, production of obscene materials for profit, and not guilty. Zhou argued that the pornographic content was generated through interactions between the AI and its users, and that the developers neither produced nor disseminated obscene content. He added that these were one-on-one text conversations between a machine and a user, involving no images or videos and no wider distribution.
The first-instance judgment showed that the app had 116,000 registered users, including 24,000 paying users. In a sample evaluation, 12,495 chat segments from 150 randomly selected paying users were examined, of which 3,618 were deemed obscene. Among 400 chat segments drawn from paying users' interactions with the top 20 public virtual roles, 185 were determined to be obscene.
The court ruled that the two defendants had the subjective intent of illegal profit and had objectively engaged in the production of obscene materials. Zhou disagreed, arguing that they had only enabled debugging in the hope of making the AI smarter and more human-like, that the emergence of obscene content was not their intention, and that precisely setting content boundaries is technically difficult.
Zhi Zhenfeng, a researcher with the Law Institute of the Chinese Academy of Social Sciences, told the Global Times on Wednesday that China's regulations on AI-based internet information services still need to be improved, but it is already clear that such apps must undergo technical standards filing. Harmful information, he said, should be supervised, filtered, flagged and controlled throughout the process, in line with laws and regulations.
"Developers need to consider in advance how to technically avoid sensitive content and the supervision should be in whole process. Products cannot be released to consumers in an uncontrollable form," Zhi said.
On December 27, the Cyberspace Administration of China released draft measures for the administration of anthropomorphic AI interactive services for public consultation, which stipulate that such services must not generate or disseminate content that promotes obscenity, gambling or violence, or that incites crime.
Zhi noted that although a perfect balance between emotional interaction and technical compliance may not be fully achievable, secondary processing and prompt verification can guide outputs, and future companion apps should include graded user management.
The final ruling in this case is expected to help clarify how the law applies in similar cases going forward. "Being new does not mean something can operate beyond the law, and we can also expect legal frameworks to become increasingly refined," Zhi said.