China solicits public opinions on security requirements for generative AI services, proposes corpus source blacklist
Published: Oct 13, 2023 12:52 AM


China has initiated the solicitation of public opinions on national security requirements for generative artificial intelligence (AI) services, aiming to enhance the security standards of generative AI service providers catering to domestic users.

China's National Information Security Standardization Technical Committee has drafted basic security requirements for generative AI services and opened them for public comment from Wednesday through October 25.

The requirements outline fundamental security criteria for generative AI services, covering corpus security, model security, security measures and security assessments. These criteria apply to providers offering generative AI services to the public within China. Service providers can conduct security assessments independently or engage third parties to do so, in accordance with these criteria. The requirements can also serve as a reference for relevant regulatory authorities when evaluating the security level of generative AI services.

The requirements propose establishing a blacklist of corpus sources and refraining from using data from blacklisted sources for training purposes.

The requirements suggest conducting a security assessment on corpora from various sources; if more than 5 percent of a single source's content is illegal or harmful, that source should be blacklisted.

When using corpora containing personal information, providers should obtain authorization and consent from the individuals concerned or meet other legal conditions for using such information.

When providers use corpora containing biometric information such as facial data, they must obtain written authorization and consent from the individuals concerned or fulfill other legal conditions for the use of such biometric information.

Providers should assess AI data annotators and grant annotation qualifications to those who pass, establish a mechanism for regularly retraining annotators, and suspend or revoke annotators' qualifications when necessary.

During model training, the safety of the generated content should be one of the major criteria for evaluating the quality of generated results.

For services provided through an interactive interface, information about the target audience, and the occasions and purposes for which the service is applicable, as well as information about any third-party foundation models used, should be disclosed to the public in a prominent location such as the homepage of the website.

In addition, providers should fully verify the necessity, applicability and safety of applying generative AI in each field within the service scope.

Previously, seven Chinese authorities, including the Cyberspace Administration of China, the National Development and Reform Commission, the Ministry of Education, the Ministry of Industry and Information Technology, the Ministry of Public Security and the National Radio and Television Administration, released interim measures for the management of generative AI services, which took effect on August 15.

The measures stipulate that service providers must not use algorithmic, data, platform or other advantages to engage in monopolistic or unfair competition practices. They also prohibit harming the physical and mental health of others, or infringing upon others' rights to their image, reputation, honor, privacy or personal information.

The measures also require providers to take effective steps to enhance the transparency of generative AI services and to improve the accuracy and reliability of generated content.

Global Times