AI for the good of international peace and security

There is no doubt that AI is a powerful, enabling technology that can be used to push forward global sustainable development. When investigating the use of AI for the Sustainable Development Goals (SDGs), we find that most efforts focus on AI for quality education and AI for healthcare, while fewer efforts have been devoted to AI for biodiversity, climate action, and AI for peace. In my opinion, these are essential topics for the future of humanity, and governments must work together on them.
Military AI, AI for peace, and AI for security are all closely related, yet in certain aspects they are fundamentally different. As an essential pillar of the SDGs, we should push AI forward for international peace and reduce, not enhance, security and safety risks. When thinking about AI for good from a peace and security perspective, it is far better to use AI to identify disinformation and misunderstandings among different countries and political bodies, and to use AI for network defense rather than attack, instead of seeking ways to create disinformation for military and political purposes. AI should be used to connect people and cultures, not to disconnect them. This is why we created an AI-enabled Culture Interactions Engine to find commonalities and diversities among different UNESCO world cultural heritages. From these world heritages, we find that culturally we are not so disconnected. These commonalities serve as roots, helping us to appreciate, understand, and even learn from many diverse cultures.
Current AIs, including recent generative AIs, are information-processing tools that seem intelligent, but without real understanding they are not truly intelligent. This is why they cannot be trusted as responsible agents that help humans make decisions. For example, although the world has not reached a consensus on lethal autonomous weapon systems, AI should at least not be used to directly make decisions leading to the destruction of human life. Effective and responsible human control must be applied through sufficient human-AI interaction. Nor should AI be used to automate diplomatic tasks, especially foreign negotiations among countries; used in this way, AI could be catastrophic for humankind. It is absurd, misleading, and irresponsible that dialogue systems powered by generative AI keep saying “I think” and “I suggest” when there is no “I” or “me” in the AI model. To emphasize again: AI should never pretend to be human, take the human position, or mislead humans into an incorrect perception of AI. We should use generative AIs to assist, but never trust them to replace humans in decision-making.
We must ensure human control over all AI-enabled weapon systems, and that control has to be sufficient, effective, and responsible. For example, cognitive overload during human-AI interactions has to be avoided. We must also prevent the proliferation of AI-enabled weapon systems, since the related technology is very likely to be misused or even abused.
In both the short term and the long term, the risk of AI replacing humankind, and even causing its extinction, will be present. In the short term, this is because we have not found a way to protect ourselves from AI's exploitation of human weaknesses: AIs wield the technology, but they have no understanding of human life or death. In the long term, it is because we have not given superintelligence, which may take decades to arrive, any practical reason to protect humankind. Our preliminary research suggests we may need to change the way we interact with each other, with other species, and with the ecology and environment, which will require all humans to work together.
Given these short-term and long-term challenges, I am quite sure that we cannot solve the issue of AI for peace and security today. Challenging as it is, however, this discussion may be a good starting point for member states. Here I would suggest that the UN Security Council consider establishing a working group on AI for peace and security to address both short-term and long-term challenges. At the expert level, it would be more flexible and scientific to work together, easier to reach consensus from a scientific and technical point of view, and possible to provide assistance and support for council member countries in their decision-making. The Security Council members should set a good example and play an important role on this important issue.
AI is meant to help humanity solve problems, not create more of them. A boy once asked me whether, beyond its appearances in science fiction, an AI-assisted nuclear device could be used to blow up an asteroid threatening the Earth, or to alter its trajectory to avoid a collision and save our lives. Although the idea may not be scientifically sound and would be very risky at this point, it at least uses AI to solve a problem for mankind, which is far better than empowering AI to attack one another with nuclear weapons on this planet, creating problems for mankind and its future. In my own view, humans should always retain, and be responsible for, final decision-making on the use of nuclear weapons. We have already affirmed that “a nuclear war cannot be won and must never be fought”. Many countries, including but not limited to the five permanent members of the United Nations Security Council, have announced their own strategies and positions on AI security and governance, and we can see commonalities among them that can serve as important inputs for international consensus; but this is still not enough. The United Nations must play a central role in setting up a framework for AI development and governance to ensure global peace and security. For a shared future for all, together we must set up an agenda and framework, leaving no one behind.

The author, Yi Zeng, is Professor and Director of the International Research Center for AI Ethics and Governance, Institute of Automation, Chinese Academy of Sciences. This is the author's briefing at the UN Security Council meeting on AI: Opportunities and Risks for International Peace and Security on July 18.

This post was published in Global Times on July 19.