OPINION / VIEWPOINT
Global AI regulations needed urgently to ensure future human safety
Published: Jun 04, 2023 04:12 PM
Illustration: Liu Rui/GT
Artificial intelligence (AI) is undoubtedly a driving force for the advancement of society, particularly as one of the key enabling technologies to advance global sustainable development. However, this does not mean that AI does not carry potential risks or that the need to maximize the benefits of AI should lead us to ignore those potential risks. Paying attention to and controlling the safety risks of AI is not to hinder its development and applications, but to ensure steady and healthy development of AI technology.

The recent joint declarations by international scholars and industry experts, titled "Pause Giant AI Experiments: An Open Letter" and the "Statement on AI Risks," are not aimed at impeding the development of AI, but rather at exploring pathways for its steady and healthy development.

Most people expect the development and use of AI to benefit humanity. They want to enjoy the advantages it brings, not suffer from the risks it may cause, let alone existential ones. Therefore, all of human society has the right to know the potential dangers of AI, and AI developers have an obligation to ensure that AI does not pose existential threats to humanity, or at least to minimize the possibility of such risks through joint efforts with all AI stakeholders. Currently, perhaps only a minority have signed these declarations to raise public awareness, but eventually the majority will participate in changing the status quo.

The potential existential risks that pandemics, nuclear war, and AI bring to humanity share several commonalities: They are hard to predict accurately; they have a wide area of impact; they concern the interests of all humankind; and they may even possess widespread lethality. Regarding the existential risks that AI may pose to humans, there are at least two categories: long-term concerns about AI and short-term risks.

In the long term, when Artificial General Intelligence (AGI) and superintelligence arrive, their levels of intelligence may far exceed that of humans, and many believe that superintelligence would compete with humans for resources and might even threaten human survival. However, the short-term concerns about AI are what we need to focus on more urgently. Because contemporary AI is merely an information processing tool that seems intelligent, without real understanding or intelligence, it can make errors in ways that are unpredictable to humans.

When one of its operations threatens human survival, AI understands neither what humans are, nor what life and death are, nor what survival risks mean. In such a scenario, AI is very likely incapable of "realizing" the danger, and humans may not be able to perceive it in time, posing a widespread threat to human survival.

It is also very likely that AI could exploit human weaknesses to create a lethal crisis for human survival, for example by exploiting and exacerbating hostility, prejudice, and misunderstanding among humans, or through the threat that AI-based lethal autonomous weapons pose to human lives. Such AI could endanger human survival even without reaching the stage of AGI or superintelligence. This kind of AI could very likely be maliciously utilized, misused, and abused by people, and the resulting risks are nearly impossible to predict and control.

In particular, recent advancements enable AI systems to exploit internet-scale data and information, and synthetic disinformation produced by generative AI has greatly reduced social trust. With the interconnectedness of all things through network communication, related risks can be magnified on a global scale.

The current race in the development of AI is in full swing, while the prevention of AI safety and ethical risks is stretched thin. The "Statement on AI Risks" should first resonate with the developers of AI, who should resolve as many potential safety risks as possible by developing and releasing AI safety solutions. Second, it should maximize awareness of potential AI safety risks among all stakeholders, including but not limited to developers, users, deployers, governments, the public, and the media, turning all stakeholders into participants safeguarding the steady and healthy development of AI.

Moreover, for every possibility that AI could pose existential risks to humans, we should conduct thorough research, including extreme-condition and stress testing, to minimize these risks to the greatest extent. To address the existential threats that AI may bring and to ensure that AI develops ethically and safely, we need to establish a global collaboration mechanism, such as an international committee on AI safety with the participation of all countries. While sharing the benefits of AI, global safety should be jointly safeguarded, so that human society as a whole can effectively utilize AI empowerment while ensuring its steady and healthy development.

The author is a professor at the Institute of Automation of the Chinese Academy of Sciences and an expert in the UNESCO Ad Hoc Expert Group on AI Ethics. opinion@globaltimes.com.cn