Artificial intelligence (AI) is undoubtedly a driving force for the advancement of society, particularly as one of the key enabling technologies for global sustainable development. However, this does not mean that AI carries no potential risks, or that the need to maximize its benefits should lead us to ignore them.

Paying attention to and controlling the safety risks of AI is not meant to hinder its development and application, but to ensure that AI develops steadily and healthily. The recent joint declarations by international scholars and industry experts, “Pause Giant AI Experiments: An Open Letter” and the “Statement on AI Risks,” are not aimed at impeding the development of AI, but rather at exploring pathways for its steady and healthy development. The majority of people develop and use AI with the vision of benefiting humanity and enjoying the advantages it brings, not to introduce risks, let alone existential risks. Therefore, all of human society has the right to know the potential risks of AI, and AI developers have the obligation to ensure that AI does not pose existential risks to humanity, or at least to minimize the possibility of such risks through joint efforts with all stakeholders of AI. Currently, perhaps only a minority have signed these declarations, but when a minority first steps forward to raise public and global awareness, the majority will eventually join in changing the status quo.
What the potential existential risks posed to humanity by pandemics, nuclear war, and AI have in common is that they are hard to predict accurately, they have a wide range of impact, they concern the interests of all humankind, and they can even be widely lethal. Regarding the existential risks that AI may pose to humans, there are at least two possibilities: one is the concern about long-term AI, and the other is the concern about short-term AI.

In the long term, when Artificial General Intelligence (AGI) and superintelligence arrive, their levels of intelligence might far exceed that of humans, and they may view humans the way humans view ants; many therefore believe that superintelligence will compete with humans for resources and may even threaten human survival. However, the concern about short-term AI is what we need to focus on more urgently. Because contemporary AI is merely an information processing tool that seems intelligent, without real understanding or intelligence, it can make errors in ways that are unpredictable to humans. When an operation threatens human survival, AI neither understands what humans are, nor what life and death are, nor what a survival risk means. In such a scenario, AI is very likely incapable of “realizing” it, and humans may not be able to perceive it in time, posing a widespread threat to human survival. It is also very likely that AI could exploit human weaknesses to create a lethal crisis for human survival, for example by exploiting and exacerbating hostility, prejudice, and misunderstanding among humans, or through the threat that lethal autonomous AI weapons pose to comparatively fragile human lives. Such AI could pose a risk to human survival even without reaching the stage of AGI or superintelligence. This kind of AI could very likely be maliciously utilized, misused, and abused by people, and the resulting risk is nearly impossible to predict and control.

In particular, recent advancements enable AI systems to exploit Web-scale data and information. The synthetic disinformation generated by generative AI has greatly eroded social trust. With the interconnectedness of all things through network communication, the related risks can be magnified on a global scale. If we begin investigating now how to avert the challenges of long-term AI, we may still be able to cope with them; the risks of short-term AI, however, are far more pressing.
The current race to develop AI is in full swing, while efforts to prevent AI safety and ethical risks are stretched thin. The “Statement on AI Risks” should first resonate with the developers of AI, who should address as many potential safety risks as possible by developing and releasing AI safety solutions. Second, it should maximize awareness of potential AI safety risks among all stakeholders, including but not limited to developers, users and deployers, governments, the public, and the media, turning all stakeholders into participants in safeguarding the steady and healthy development of AI. Moreover, for every possibility that AI could pose existential risks to humans, we should conduct thorough research as well as extreme-case and stress testing to minimize these risks to the greatest extent. To address the existential risks that AI may bring to humans and to ensure that AI develops ethically and safely, we need to establish a global collaboration mechanism, such as a Global AI Safety Committee with the participation of all countries, leaving no one behind. While sharing the benefits of AI, we should jointly safeguard global safety, so that human society as a whole can effectively harness the empowerment of AI while ensuring its steady and healthy development.
Author: Yi Zeng
- Professor and Director of the Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences;
- Director of the International Research Center for AI Ethics and Governance;
- Director of the Center for Long-term AI;
- Member of the National Governance Committee of New Generation AI in China;
- Member of the UNESCO Ad Hoc Expert Group on AI Ethics
This post was published under the title “Global AI regulations needed urgently to ensure future human safety” in the Global Times on June 4, 2023. https://www.globaltimes.cn/page/202306/1291899.shtml