Global Cooperation on Artificial Intelligence is not a Zero-Sum Game

by Yi Zeng

Over the past 60 years, scientific research on artificial intelligence (AI) has evolved from simulating human intelligence into an enabling technology that advances human and social development. AI, a technology crucial to global digital advancement, is now a developmental priority for many countries and intergovernmental organizations. In the meantime, the impact of AI on science, technology and policy is profoundly influencing international relations and the global landscape.

However, some countries and intergovernmental organizations are concerned that their leading position in the field of AI will be challenged in an era of global development, or mistakenly assume that the development of AI is a zero-sum game. As a result, recent trends show that scientific and technological exchanges between scholars and industries in different regions are being negatively impacted. There are even attempts to isolate some countries from others in the field of AI development.

AI promises wide applications in countries across the globe and in all walks of life. It is an important channel for establishing ties between different countries, enabling cultural communication and interaction. For this reason, countries across the globe are working hard on the formulation and implementation of strategies to promote the development and application of AI-related technologies. In recent years, China has steadily increased its fundamental R&D capacity, continuously expanding potential applications of AI. This has opened up more opportunities and secured advantages in the international community. In this context, the State Council issued the “New Generation Artificial Intelligence Development Plan,” which laid the groundwork for China to become one of the world’s premier AI innovation centers by 2030. Yet it should be noted that no country could possibly become the sole center of global AI development. The global AI network of the future will feature a group of major centers for basic research, industrial R&D and application services. These premier research centers will be highly interconnected and collaborate intensively, jointly driving forward the global development of AI.

The application of AI will serve all humankind, influencing future social development and human destiny. Like the many research-based, groundbreaking technologies that have shaped the future of humankind, the development of AI is not a zero-sum game. Take brain science as an example. It might still take hundreds of years for scientists to decode the brain’s structures and cognitive functions and reveal the nature of human intelligence. This is why neuroscience researchers across the world have joined hands to develop institutions like the International Brain Initiative (IBI), which coordinates brain research projects in different countries. This organization has enabled cooperation and resource sharing. The Brain Observatories have been instituted, promoting global sharing of brain research data and scientific discoveries. It takes the world to decode the brain. Likewise, it is even more essential to establish a mechanism for long-term international cooperation on AI to advance the development of humankind.

The principles of AI governance serve as an important bridge and fundamental policy for countries to carry out mutually supportive AI development. In the “Governance Principles for the New Generation Artificial Intelligence — Developing Responsible Artificial Intelligence,” China proposed to promote global sustainable development through AI. China will do so by putting forward the principles of harmony and friendliness, inclusion and sharing, and openness and collaboration. These concepts represent China’s vision for the development of next-generation AI and its policy for global AI cooperation.

Inclusive development is based on the recognition of diverse cultures and values, thus necessitating cultural interaction, mutual understanding, increased consensus building and the management of differences against the backdrop of diverse cultures. For example, the concept and expression of “harmony” is widely used in Chinese, Japanese, South Korean and African philosophy and cultures. The understanding and development of “harmony” in different cultures enriches and deepens the implications of relevant concepts in modern society. This shared understanding is likely to inspire the harmonious coexistence of human beings and AI in the future. It is a shared goal of humankind to achieve sustainable development, irrespective of differences in values and cultures or regional and political divergences. All parts of the world should seek to realize environmentally sustainable, AI-enabled development of society and the economy. To this end, countries need to build on the basic research of AI to promote exchanges. Human society will benefit from sharing AI evaluations, application scenarios, best practices and the various efforts on technological, social and ethical risks, among others.

The core of fundamental AI research lies in innovations in models and algorithms. These are generally considered the strategic high ground of competition among countries. Nevertheless, even in this pivotal field, the adequate sharing of outcomes remains the best way to speed up innovation. In technical AI, challenges based on shared data and tasks, together with regular international symposiums and competitions, are important tools for fostering virtuous competition. They allow AI researchers and engineers to inspire each other with innovative ideas for the future, bringing forward better models and algorithms. Open-source algorithms and systems are the most effective tool for maximizing the impact of innovation. Only in this way can we mobilize research and technology communities to push forward the rapid iteration of AI algorithms and systems. As a representative example, the extensive development and application of deep learning models and systems worldwide is driven by the relevant open-source communities. These directly benefit the creators of those algorithms and systems, bringing them the honor and recognition they deserve. As the history of technological development shows, through open-source code and models, community contributors from countries all over the world can immediately identify and repair loopholes and safety hazards lurking in algorithms. Open ecosystems are gradually taking shape in the field of AI, fueling the fast development of AI technologies. Through collaboration, AI innovators, policymakers and governments across the globe are obliged to carry forward and maintain these mechanisms and platforms for virtuous competition and development.

The current development of AI is still highly uncertain. Countries need to jointly cope with the technical safety, security and ethical issues arising from its development. Otherwise, many regions will fall victim to the negative impacts of technological advancements, regardless of lessons previously learned in other countries and regions. For example, some middle schools and universities in China adopted an AI-enabled application for automatic facial expression recognition in classrooms. However, this infringed on the privacy of students and negatively impacted interactions between students and teachers. Eventually, various education departments and members of the public spoke out against the system, resulting in its suspension. Unfortunately, these experiences and responses were underappreciated by other countries; the United States and some European countries let similar cases repeat themselves in the following months. That being said, AI should not be completely banned in schools. The critical issue is whether we can protect the privacy of individuals while simultaneously employing technology to benefit all. For example, Japan has set a good precedent for other countries, as some Japanese primary and middle schools use AI to prevent school bullying. In addition, different paths to realizing AI carry both long-standing technological and cultural risks. For example, AI is primarily considered a tool in the West, whereas Japan regards the technology as a companion and quasi-member of society. In much science fiction, AI is depicted as the common enemy of humankind. In the long run, different visions of the relationship between AI and humankind will generate different types of risks in the course of technological development. As such, it is of vital importance to enter into global collaboration to circumvent such risks, jointly examine long-term issues and forestall dangers. On top of that, we should make efforts to prevent vicious competition, misuse, abuse and potentially malicious uses and applications of AI amid the great uncertainties of technological development.

China is dedicated to becoming one of the world’s premier AI innovation centers. To that end, we need to first clarify two specific considerations. To begin with, every center has different developmental concepts, advantages and priorities. Therefore, it is critical that they learn from each other. The US and the United Kingdom have prominent advantages as original innovators of the basic theories of AI. European countries have generally laid a solid foundation for AI ethics and governance. Japan has secured outstanding achievements in exploring the relationship between human beings and AI. China, as one of the leading centers of AI development, should assume the responsibility of empowering the development of AI in less-advantaged countries, especially those that are still developing, to deliver the benefits of AI to every corner of the world. Other countries may draw on these insights as well.

Both Chinese and British scholars have observed that despite differences in values, principles or other unresolved divergences, cross-cultural cooperation on AI is still effective as long as a consensus is reached on the handling of specific issues. The philosophy of seeking harmony without uniformity and providing mutual support through tough times provides communication channels for technological development, regardless of cultural and historical differences.

A global, interconnected landscape of AI development featuring multiple main centers is bound to emerge in the future. To promote this vision, countries should engage in cooperation to strengthen mutual trust, thus avoiding misunderstandings. In order to realize the global sustainable development of humankind, society, the environment and technology, we should give full expression to the positive role of highly inclusive international organizations such as the United Nations. To realize the global development of AI technology and its applications to serve the well-being of humankind and a better future for all of us, we must stand and hold together as a human community with a shared future.

About the Author

Yi Zeng

Professor, Institute of Automation, Chinese Academy of Sciences.

Director of Research Center for AI Ethics and Sustainable Development, Beijing Academy of Artificial Intelligence.

Chief Scientist, Institute for AI International Governance, Tsinghua University.

Adjunct Research Fellow, Center for International Security and Strategy, Tsinghua University.

A shortened Chinese version of this opinion piece was published in Guangming Daily on Jan 20th, 2021.

https://epaper.gmw.cn/gmrb/html/2021-01/20/nw.D110000gmrb_20210120_3-02.htm

Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance

Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and priorities. There are many barriers to achieving this at present, including mistrust between cultures, and more practical challenges of coordinating across different locations. The paper “Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance”, a joint work by members of the China-UK Research Centre for AI Ethics and Governance, the University of Cambridge (UK), the Beijing Academy of Artificial Intelligence (China), the Chinese Academy of Sciences (China), and Peking University (China), focuses particularly on barriers to cooperation between Europe and North America on the one hand and East Asia on the other, as regions which currently have an outsized impact on the development of AI ethics and governance. We suggest that there is reason to be optimistic about achieving greater cross-cultural cooperation on AI ethics and governance. We argue that misunderstandings between cultures and regions play a more important role in undermining cross-cultural trust, relative to fundamental disagreements, than is often supposed. Even where fundamental differences exist, these may not necessarily prevent productive cross-cultural cooperation, for two reasons: (1) cooperation does not require achieving agreement on principles and standards for all areas of AI; and (2) it is sometimes possible to reach agreement on practical issues despite disagreement on more abstract values or principles. We believe that academia has a key role to play in promoting cross-cultural cooperation on AI ethics and governance, by building greater mutual understanding, and clarifying where different forms of agreement will be both necessary and possible.
We make a number of recommendations for practical steps and initiatives, including translation and multilingual publication of key documents, researcher exchange programmes, and development of research agendas on cross-cultural topics. The authors would like to thank the participants of the July 11–12 Cross-cultural trust for beneficial AI workshop for valuable discussions relevant to the themes of the paper, as well as Emma Bates, Haydn Belfield, Martina Kunz, Amritha Jayanti, Luke Kemp, Onora O’Neill, and two anonymous manuscript reviewers for helpful comments on previous drafts of this paper.

The paper is available at: https://link.springer.com/article/10.1007/s13347-020-00402-x

Facial Recognition and Public Health Management Survey

Facial recognition technology plays an active role in different contexts such as national and social security and daily life, but it also brings hidden dangers and challenges to privacy, security and more. During public health-related events such as COVID-19, automatic detection techniques such as facial recognition play an active role in prevention and control. Applications involving personal information and privacy should be subject to legal and agile governance. This survey seeks comments and suggestions on the application of facial recognition in daily life and public health events.

This survey is for scientific research purposes only. The results will be published as white papers and/or open access articles. No personally identifiable information will be collected in this survey.

This survey is a collaborative work among Beijing Academy of Artificial Intelligence, China-UK Research Centre for AI Ethics and Governance, and Schwarzman College of Tsinghua University.

We would be grateful to hear your voice at https://forms.gle/9U5Lc6krFrpE9Ubr9

The Official Launch of ChinUK Centre for AI Ethics and Governance

On November 4th, 2019, a new bilateral China-UK Research Centre for AI Ethics and Governance (“ChinUK”) was officially launched. Led by Professor Yi Zeng of the Institute of Automation (CASIA) at the Chinese Academy of Sciences, and Professor Huw Price of LCFI, ChinUK aims to support exchanges and collaboration between scholars in China and the UK, in the new interdisciplinary field studying the ethics and impacts of AI.

Professor Price exchanged copies of a new MoU with Professor Cheng-Lin Liu, Vice-President of CASIA, at the ChinUK inauguration ceremony in Beijing on Monday 4 November. LCFI researchers Martina Kunz, Yang Liu and Danit Gal, as well as Huw Price, presented talks at the first ChinUK joint workshop. 


Professor Zeng said: “ChinUK is a cross-cultural and transdisciplinary centre on AI Ethics and Governance. Its mission is to build bridges between Eastern and Western perspectives on AI, linking scholars in China and UK for sharing, interaction, and complementary research. In this way the new CASIA-LCFI partnership will help to promote the development of AI for humanity, ecology, and social good, and for a shared future of Human-AI Symbiosis.”

Professor Price said: “We are delighted to be linked to CASIA in this way. Collaborations between China and the West will be vital in ensuring that AI is safe and beneficial. China will be a leader in the development of AI, and its voice will be just as important in global conversations about the impacts of this technology. ChinUK will support these conversations.”


Linking Artificial Intelligence Principles

Various Artificial Intelligence Principles are designed with different considerations, and none of them can be perfect and complete for every scenario. Linking Artificial Intelligence Principles (LAIP) is an initiative and platform for synthesizing, linking, and analyzing various Artificial Intelligence Principles worldwide, from different research institutes, non-profit organizations, non-governmental organizations, companies, etc. The effort aims at understanding to what degree these different AI principle proposals share common values, and how they differ from and complement each other.

The current LAIP engine enables users to list and compare different AI principle proposals at the keyword, topic and paragraph levels.

The Platform can be accessed from HERE.

The paper “Linking Artificial Intelligence Principles” gives more details on the design philosophy and initial observations.

Establishing a China-UK Initiative for AI Ethics and Governance

The China-UK Research Centre for AI Ethics and Governance (the ChinUK Centre, for short) is a cross-cultural and transdisciplinary centre. Through linking Eastern and Western wisdom, the Centre aims to bridge efforts from China and the UK to share, interact on, and complement work in AI ethics and governance, ensuring AI develops for humanity and social good. The Centre is hosted at the Innovation Academy of Artificial Intelligence, Chinese Academy of Sciences.

The characteristic contribution of the Centre will be complementary, cross-cultural, social and technical investigation of AI risks, safety, ethics, and governance. China and the UK are representatives of Eastern and Western culture and society respectively, and will face not only similar but also very different challenges in the process of AI development and application in their different societies. We are very excited to see how the social and technological efforts of the two countries can be shared and, through interaction, bridged together into a more global vision, finally achieving the common goal of building beneficial AI for humanity and society. The Centre's current plans for concrete research include, but are not limited to, the following:
1. Linking, Synthesizing, and Analyzing Global Ethical Principles of Artificial Intelligence, Robotics, and Autonomous Systems.
2. Building Technical Models for Low-Risk, Safe, and Ethical AI.