AI Photo: VCG
China on Monday issued the 2.0 edition of the Artificial Intelligence (AI) Safety Governance Framework, according to an article published by the Cyberspace Administration of China (CAC) via its official social media account. The framework aims to further enhance AI-related risk grading, controls and safeguards to ensure the steady development of the cutting-edge sector.
Building on Framework 1.0, released on September 9, 2024, Framework 2.0 incorporates the latest developments and practical applications of AI technology. It continuously tracks emerging risks, refines and optimizes risk categories, explores risk-grading strategies, and dynamically updates prevention and governance measures, the article said.
Framework 2.0 further refines technical standards and strengthens ethical reviews, providing differentiated governance for applications with varying risk levels, Wang Peng, associate research fellow at the Beijing Academy of Social Sciences, told the Global Times on Monday, noting that it integrates ethical requirements into technical standards to guide technology toward positive development.
The release of Framework 2.0 aligns with global AI development trends and combines technological innovation with governance practices. It strengthens consensus on AI security, ethics, and governance, and helps foster a safe, trustworthy, and controllable AI ecosystem. The framework also aims to establish a collaborative governance system that spans borders, sectors, and industries, said an official with the CAC.
At the same time, Framework 2.0 helps promote cooperation on AI safety governance through multilateral mechanisms. It supports the global, inclusive sharing of technological achievements and ensures that the benefits of AI development are widely enjoyed by society, said the official.
Compared with the previous version, Framework 2.0 provides more detailed guidance on classifying AI safety risks, on technological countermeasures to address them, and on comprehensive governance measures, including safeguards against derivative safety risks arising from AI applications.
In terms of comprehensive governance measures, Framework 2.0 adds four new measures to formulate and refine AI safety risk governance mechanisms and regulations with multi-stakeholder participation, including technology research and development institutions, service providers, users, government authorities, and social organizations, according to the document.
Additionally, Framework 2.0 introduces grading principles for AI safety risks and fundamental principles for trustworthy AI, helping assess and manage risks generated during AI development and deployment.
These additions respond to new risks brought by the rapid evolution of AI technology, while also highlighting the most challenging management issues in current AI governance, Chen Jing, vice president of the Technology and Strategy Research Institute, told the Global Times on Monday.
Chen pointed out that the highlight of Framework 2.0 is its shift from "static compliance" to "dynamic governance," representing a significant advance in AI management thinking.
The upgraded version covers multiple dimensions, including data security, system security, cognitive-domain attacks, real-world threats, and ethical challenges, he said. "Compared with the previous version, Framework 2.0 provides more detailed and comprehensive risk classifications, reflecting lessons learned from early practical experience," Chen noted.
The framework further strengthens the enforcement of governance standards, enhances AI safety and trustworthiness, encourages safe innovation, promotes inclusive applications, and builds a healthy and orderly AI ecosystem, said Wang.
The number of Chinese AI companies has surged from just over 1,400 to more than 5,000 in the past five years, according to data released by the Ministry of Industry and Information Technology.
China is not only advancing its own AI development but also taking an active role in global AI governance, promoting international collaboration and sharing standards to ensure that AI benefits all of humanity.
On July 26, the 2025 World Artificial Intelligence Conference and High-Level Meeting on Global AI Governance released the Global AI Governance Action Plan. The Action Plan outlines six core principles and 13 concrete actions, reflecting broad international consensus. It is poised to inject renewed momentum into the global development and governance of AI technology.
In global AI governance, China has shifted from active participant to rule contributor, transforming its domestic AI governance toolbox into an international public good, said Chen.