Promoting Global AI Safety and Governance Capacity-building through International Cooperation

Artificial Intelligence (AI) originated from scientific and philosophical speculation about whether machines could think. Supported by interdisciplinary scientific exploration, AI has drawn inspiration from natural intelligence in terms of learning, development, and evolution, with the goal of gradually approaching, or even surpassing, human intelligence. AI can bring great benefits to society, ecosystems, and civilisation, but it can also introduce risks and safety hazards. The goal of AI safety and governance is to ensure the steady and healthy development of AI, making it a safe, reliable, and controllable technology that empowers society. From a global governance perspective, AI should be harnessed to promote social progress, ecological harmony, and the advancement of human civilisation.

All stakeholders involved in the design, development, research, deployment, and use of AI, including policymakers, have an obligation not only to drive progress with cutting-edge AI technologies but also to anticipate, identify, and address potential risks and safety hazards, ensuring that AI products and services are safe, secure, reliable, controllable, and governable throughout their entire lifecycle. AI safety requires addressing both internal design flaws and external attacks. Current AI systems, however intelligent they may seem, are still fundamentally information-processing systems and may exhibit errors or risks in unpredictable ways. As AI's scope of application expands, global cooperation on AI safety has become urgent. The current approach to AI risk management is largely reactive, addressing known and newly identified risks, with insufficient proactive foresight into the computational mechanisms of AI models and the risks they may pose. We need to shift from a passive, reactive model of risk management to a proactive model of safety mechanism research and safety framework construction.

A comprehensive AI safety and governance system must be systematically analysed, developed, and deployed across all layers: foundational computing and communication infrastructure, data, models, and applications, as well as user motivations and usage patterns. China not only needs to continuously improve the relevant policy frameworks but also to establish AI safety and governance laboratories, research institutions, evaluation centres, and empowerment centres to promote cutting-edge research and practice. In addition, building a network for AI safety cooperation and fostering collaboration across government, industry, and academia is critical to developing a robust AI safety governance system in China.

Safety and governance capabilities are core components of AI capacity-building and are essential for the responsible development and use of AI. In July this year, the UN General Assembly adopted the resolution "Enhancing international cooperation on capacity-building of artificial intelligence," led by China and supported by over 140 countries. AI capacity-building encompasses many aspects, including but not limited to infrastructure, research and innovation, industrial applications, safety and governance, public literacy, and talent cultivation. No country or region can manage AI safety and governance in isolation. Countries that are more advanced in AI science, technology, and applications can enjoy its benefits earlier, but must also confront potential risks sooner. Research data show that countries with more advanced AI development face higher frequencies of AI-related risks and incidents, and no AI system developed by any country today can ensure absolute safety. The development and application of AI require strong network infrastructure, and because of AI's interconnected nature, advancing AI safety and governance requires global collaboration. International cooperation is the only way to ensure that AI remains globally safe, reliable, and controllable. AI safety and governance are key to realising the vision of a community with a shared future in cyberspace.

Looking at today's international AI safety and governance landscape, there are evident challenges, such as the "small yard, high fence" and "large yard, small gate" approaches, reflecting the lack of an effective and inclusive global governance system. It is urgent to establish a global AI safety and governance framework with the United Nations as the main channel. Through international cooperation, we must not only build a relatively complete and mutually beneficial AI safety and governance system to address current AI risks, but also plan ahead to collectively research and respond to the potential catastrophic or existential risks posed by advanced AI. Systematic international cooperation on AI safety and governance will strongly promote the stable development and application of AI worldwide, empowering the realisation of the global sustainable development goals and a shared future for humanity.

----------------------------------------

Author:

Yi Zeng, Director of the Beijing Institute of AI Safety and Governance (Beijing-AISI), also Director of the International Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, and a member of the United Nations High-level Advisory Body on AI, the UNESCO Ad Hoc Expert Group on AI Ethics, and the WHO Expert Group on Ethics and Governance of AI for Health.

On September 13, the World Internet Conference (WIC) hosted the "Theoretical Seminar on Advancing the Cyberspace Community with a Shared Future to a New Stage," at which Yi Zeng was invited to deliver the speech above.