The AI Safety Process

Efforts on AI Safety began with academic papers and conferences at least ten years ago (dating back at least to the 2015 AI Safety Conference in Puerto Rico [1]), driven by academia and industry, and have drawn rapidly increasing attention through emerging safety risks and open letters, especially over the past two years [2,3]. Heavily influenced by all of these, governments are now leading AI-safety-related summits and initiatives [4,5], although changes of government create uncertainty about the continuity of these efforts.

For the future, these efforts need to be transformed into an enduring, inclusive, community-driven AI Safety Process, one flexible enough to accommodate different perspectives while ensuring consistency, coordination, and scalability in AI safety efforts across nations and across time. The AI Safety Process should be for good and for all: it should not be owned by any one country or organisation, but should be sustained through continuous collective effort by all stakeholders within a coordinated framework across time and space, realised, promoted, and supported by contributions from academia, industry, civil society, governments, intergovernmental organisations, and others.

AI Safety is only one perspective on AI Governance, and it must be considered closely, both technically and socially, together with AI Development and Use. At the same time, it should be recognised that AI is an enabling technology for societal and ecological development, and that AI Safety exists to ensure the technology and its services are safe and secure enough to support that development. Credible intergovernmental organisations should coordinate and play a major role in maintaining the stability, continuity, and effectiveness of the AI Safety Process, for good and for all.

References

[1] The 2015 AI Safety Conference in Puerto Rico. https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/

[2] Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[3] Statement on AI Risk. https://www.safe.ai/work/statement-on-ai-risk

[4] The 2023 AI Safety Summit. https://www.gov.uk/government/topical-events/ai-safety-summit-2023

[5] The 2024 meeting of the International Network of AI Safety Institutes. https://www.nist.gov/news-events/news/2024/11/fact-sheet-us-department-commerce-us-department-state-launch-international

Author:

Yi Zeng, Director of the Beijing Institute of AI Safety and Governance (Beijing-AISI), Director of the International Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, and an expert on the United Nations High-level Advisory Body on AI, the UNESCO Ad Hoc Expert Group on AI Ethics, and the WHO Expert Group on AI Ethics and Governance for Healthcare.