The Ethical Norms for the New Generation Artificial Intelligence, China

On September 25th, 2021, the National Governance Committee for the New Generation Artificial Intelligence published the “Ethical Norms for the New Generation Artificial Intelligence”. The document aims to integrate ethics into the entire lifecycle of AI and to provide ethical guidelines for natural persons, legal persons, and other related organizations engaged in AI-related activities.

This set of ethical norms went through survey research, concentrated drafting, and a public comment period, and fully considers current ethical concerns about privacy, prejudice, discrimination, and fairness raised by all walks of life. It comprises general principles, ethical norms for specific activities, and organization and implementation guidelines. It puts forward six fundamental ethical requirements: enhancing the well-being of humankind, promoting fairness and justice, protecting privacy and security, ensuring controllability and trustworthiness, strengthening accountability, and improving ethical literacy. It also puts forward eighteen specific ethical requirements for specific activities such as the management, research and development, supply, and use of AI. The original Chinese version is available at: http://www.most.gov.cn/kjbgz/202109/t20210926_177063.html

The full text of the set of norms is as follows:

Ethical Norms for the New Generation Artificial Intelligence

This set of norms is formulated in order to deeply implement the “New Generation Artificial Intelligence Development Plan”, to refine and implement the “Governance Principles for the New Generation Artificial Intelligence”, to enhance the ethical and behavioral awareness of the entire society regarding Artificial Intelligence (AI), to actively guide responsible AI research, development, and application activities, and to promote the healthy development of AI.

Chapter 1. General Principles

1. This set of norms aims to integrate ethics into the entire life cycle of AI, to promote fairness, justice, harmony, safety and security, and to avoid issues such as prejudice, discrimination, privacy violations, and information leakage.

2. This set of norms applies to natural persons, legal persons, and other related organizations engaged in related activities such as management, research and development, supply, and use of AI. (1) The management activities mainly refer to strategic planning; the formulation and implementation of policies, laws, regulations, and technical standards; resource allocation; supervision and inspection; etc. (2) The research and development activities mainly refer to scientific research, technology development, product development, etc. related to AI. (3) The supply activities mainly refer to the production, operation, and sales of AI products and services. (4) The use activities mainly refer to the procurement, consumption, and operation of AI products and services.

3. Various activities of AI shall abide by the following fundamental ethical norms. (1) Enhancing the well-being of humankind. Adhere to the people-oriented vision, abide by the common values of humankind, respect human rights and the fundamental interests of humankind, and abide by national and regional ethical norms. Adhere to the priority of public interests, promote human-machine harmony, improve people’s livelihood, enhance the sense of happiness, promote the sustainable development of the economy, society, and ecology, and jointly build a human community with a shared future. (2) Promoting fairness and justice. Adhere to shared benefits and inclusivity, effectively protect the legitimate rights and interests of all relevant stakeholders, promote the fair sharing of the benefits of AI across the whole society, and promote social fairness and justice and equality of opportunity. When providing AI products and services, fully respect and help vulnerable and underrepresented groups, and provide corresponding alternatives as needed. (3) Protecting privacy and security. Fully respect individuals’ rights to know and to consent regarding their personal information. Handle personal information and protect personal privacy and data security in accordance with the principles of lawfulness, justifiability, necessity, and integrity; do no harm to individuals’ legitimate data rights; do not illegally collect or use personal information by stealing, tampering, or leaking; and do not infringe on individuals’ privacy rights. (4) Ensuring controllability and trustworthiness. Ensure that humans have the full power of decision-making, the right to choose whether to accept the services provided by AI, the right to withdraw from interaction with AI at any time, and the right to suspend the operation of AI systems at any time, and ensure that AI is always under meaningful human control. (5) Strengthening accountability. Uphold the position that human beings are the ultimate liable subjects. Clarify the responsibilities of all relevant stakeholders, comprehensively enhance the awareness of responsibility, and practice introspection and self-discipline throughout the entire life cycle of AI. Establish an accountability mechanism in AI-related activities, and do not evade liability reviews or escape from responsibilities. (6) Improving ethical literacy. Actively learn and popularize knowledge related to AI ethics, objectively understand ethical issues, and neither underestimate nor exaggerate ethical risks. Actively carry out or participate in discussions of the ethical issues of AI, deeply promote the practice of AI ethics and governance, and improve the ability to respond to related issues.

4. The ethical norms that should be followed in specific activities related to AI include the norms of management, the norms of research and development, the norms of supply, and the norms of use.

Chapter 2. The Norms of Management

5. Promote agile governance. Respect the laws governing the development of AI, fully understand the potential and limitations of AI, and continue to optimize the governance mechanisms and methods for AI. Do not become divorced from reality, and do not rush for quick success and instant benefits in strategic decision-making, institution building, and resource allocation. Promote the healthy and sustainable development of AI in an orderly manner.

6. Active practice. Comply with AI-related laws, regulations, policies, and standards, actively integrate AI ethics into the entire management process, take the lead in becoming practitioners and promoters of AI ethics and governance, summarize and promote AI governance experience in a timely manner, and actively respond to society’s concerns about the ethics of AI.

7. Exercise and use power correctly. Clarify the responsibilities and power boundaries of AI-related management activities, and standardize the conditions and procedures for exercising power. Fully respect and protect relevant stakeholders’ privacy, freedom, dignity, safety, and other rights and legal interests, and prohibit the improper use of power to infringe on the legal rights of natural persons, legal persons, and other organizations.

8. Strengthen risk prevention. Enhance bottom-line thinking and risk awareness, strengthen research on and assessment of the potential risks in the development of AI, carry out systematic risk monitoring and evaluation in a timely manner, establish an effective early-warning mechanism for risks, and enhance the ability to manage, control, and dispose of the ethical risks of AI.

9. Promote inclusivity and openness. Pay full attention to the rights and demands of all stakeholders related to AI, encourage the application of diverse AI technologies to solve practical problems in economic and social development, encourage cross-disciplinary, cross-domain, cross-regional, and cross-border exchanges and cooperation, and promote the formation of AI governance frameworks, standards and norms with broad consensus.

Chapter 3. The Norms of Research and Development

10. Strengthen the awareness of self-discipline. Strengthen self-discipline in AI research and development activities, actively integrate AI ethics into every phase of technology research and development, consciously conduct self-review, strengthen self-management, and do not engage in AI research and development that violates ethics and morality.

11. Improve data quality. In the phases of data collection, storage, use, processing, transmission, provision, disclosure, etc., strictly abide by data-related laws, standards, and norms. Improve the completeness, timeliness, consistency, standards compliance, and accuracy of data.

12. Enhance safety, security, and transparency. In the phases of algorithm design, implementation, and application, etc., improve transparency, interpretability, understandability, reliability, and controllability, enhance the resilience, adaptability, and anti-interference capability of AI systems, and gradually realize verifiable, auditable, supervisable, traceable, predictable, and trustworthy AI.

13. Avoid bias and discrimination. During the process of data collection and algorithm development, strengthen ethics review, fully consider the diversity of demands, avoid potential data and algorithmic bias, and strive to achieve inclusivity, fairness and non-discrimination of AI systems.

Chapter 4. The Norms of Supply

14. Respect market rules. Strictly abide by the various rules and regulations for market access, competition, and trading activities, actively maintain market order, and create a market environment conducive to the development of AI. Data monopolies, platform monopolies, etc. must not be used to disrupt orderly market competition, and any means that infringe on the intellectual property rights of other subjects are forbidden.

15. Strengthen quality control. Strengthen the quality monitoring and evaluation of AI products and services in use, avoid infringements on personal safety, property safety, user privacy, etc. caused by product defects introduced during the design and development phases, and do not operate, sell, or provide products and services that do not meet quality standards.

16. Protect the rights and interests of users. Users should be clearly informed that AI technology is used in products and services. The functions and limitations of AI products and services should be clearly identified, and users’ rights to know and to consent should be ensured. Simple and easy-to-understand ways for users to choose to use or quit AI mode should be provided, and it is forbidden to set obstacles to users’ fair use of AI products and services.

17. Strengthen emergency protection. Emergency mechanisms and loss-compensation plans and measures should be studied and formulated. AI systems should be monitored in a timely manner, user feedback should be responded to and processed promptly, systemic failures should be prevented in time, and suppliers should be ready to assist relevant entities in intervening in AI systems in accordance with laws and regulations to reduce losses and avoid risks.

Chapter 5. The Norms of Use

18. Promote good use. Strengthen the justification and evaluation of AI products and services before use, become fully aware of their benefits, and fully consider the legitimate rights and interests of the various stakeholders, so as to better promote economic prosperity, social progress, and sustainable development.

19. Avoid misuse and abuse. Fully understand the scope of application and potential negative effects of AI products and services, earnestly respect the rights of relevant entities not to use AI products or services, avoid the improper use, misuse, and abuse of AI products and services, and avoid unintentionally damaging the legitimate rights and interests of others.

20. Forbid malicious use. It is forbidden to use AI products and services that do not comply with laws, regulations, ethical norms, and standards. It is forbidden to use AI products and services to engage in illegal activities. It is strictly forbidden to endanger national security, public safety and production safety, and it is strictly forbidden to do harm to public interests.

21. Provide timely and proactive feedback. Actively participate in the practice of AI ethics and governance. When technical safety and security flaws, policy and legal vacuums, or regulatory lags are found in the use of AI products and services, promptly give feedback to the relevant subjects and assist in solving the problems.

22. Improve the ability to use. Actively learn AI-related knowledge, and actively master the skills required in the various phases related to the use of AI products and services, such as operation, maintenance, and emergency response, so as to ensure their safe and efficient use.

Chapter 6. Organization and Implementation

23. This set of norms is issued by the National Governance Committee for the New Generation Artificial Intelligence, which is responsible for interpreting it and guiding its implementation.

24. In light of actual requirements and needs, management departments at all levels, enterprises, universities, research institutes, associations, and other related organizations may formulate more specific ethical norms and related measures based on this set of norms.

25. This set of norms shall take effect from the date of its publication and shall be revised in due course according to the needs of economic and social development and the state of AI’s development.

National Governance Committee for the New Generation Artificial Intelligence, China

September 25th, 2021

Disclaimer: As of the release of this translation (September 27th, 2021), the National Governance Committee for the New Generation Artificial Intelligence had not provided an official translation. This version was produced by Yi Zeng (yi.zeng@ia.ac.cn) from the China-UK Research Centre for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, for general information purposes only. No liability is assumed for the accuracy of the translation. It is always preferable to refer to the original Chinese version at: http://www.most.gov.cn/kjbgz/202109/t20210926_177063.html.

Global Cooperation on Artificial Intelligence is not a Zero-Sum Game

by Yi Zeng

In the past 60 years, scientific research on artificial intelligence (AI) has evolved from simulating human intelligence to an enabling technology that advances human and social development. AI, a technology crucial to global digital advancement, is now a developmental priority for many countries and intergovernmental organizations. In the meantime, the impact of AI on science, technology, and policy is profoundly influencing international relations and the global landscape.

However, some countries and intergovernmental organizations are concerned that their leading position in the field of AI will be challenged in an era of global development, or mistakenly assume that the development of AI is a zero-sum game. As a result, recent trends show that scientific and technological exchanges between scholars and industries in different geolocations are being negatively affected. There are even attempts to isolate some countries from others in the field of AI development.

AI promises wide applications in countries across the globe and in all walks of life. It is an important channel for establishing ties between different countries, enabling cultural communication and interaction. For this reason, countries across the globe are working hard on the formulation and implementation of strategies to promote the development and application of AI-related technologies. In recent years, China has steadily increased its fundamental R&D capacity, continuously expanding the potential applications of AI. This has opened up more opportunities and secured advantages in the international community. In this context, the State Council issued the “New Generation Artificial Intelligence Development Plan,” which lays the groundwork for China to become a premier AI innovation center of the world by 2030. Yet it should be noted that no country could possibly become the sole center of global AI development. The global AI network of the future will feature a group of major centers for basic research, industrial R&D, and application services. These premier research centers will be highly interconnected and collaborate intensively, jointly driving forward the global development of AI.

The application of AI will serve all humankind, influencing future social development and human destiny. Like the many groundbreaking, research-based technologies that have influenced the future of humankind, the development of AI is not a zero-sum game. Take brain science as an example. It might still take hundreds of years for scientists to decode the brain’s structures and cognitive functions and reveal the nature of human intelligence. This is why neuroscience researchers across the world have joined hands to develop institutions like the International Brain Initiative (IBI) to coordinate brain research projects in different countries. This organization has enabled cooperation and resource sharing. Brain observatories have been instituted, promoting the global sharing of brain research data and scientific discoveries. It takes the world to decode the brain. Likewise, it is even more essential to establish a mechanism for long-term international cooperation on AI to advance the development of humankind.

The principles of AI governance serve as an important bridge and fundamental policy for countries to carry out mutually supportive AI development. In the “Governance Principles for the New Generation Artificial Intelligence — Developing Responsible Artificial Intelligence,” China proposed to promote global sustainable development through AI. China will do so by putting forward the principles of harmony and human-friendliness, inclusion and sharing, and openness and collaboration. These concepts represent China’s vision for the development of next-generation AI and China’s policy for global AI cooperation.

Inclusive development is based on the recognition of diverse cultures and values, thus necessitating cultural interaction, mutual understanding, increased consensus building, and the management of differences against the backdrop of diverse cultures. For example, the concept and expression of “harmony” is widely used in Chinese, Japanese, South Korean, and African philosophy and cultures. The understanding and development of “harmony” in different cultures enriches and deepens the implications of related concepts in modern society. This shared understanding is likely to inspire the harmonious coexistence of human beings and AI in the future. It is a shared goal of humankind to achieve sustainable development, irrespective of differences in values and cultures or regional and political divergences. All parts of the world should seek to realize environmentally sustainable, AI-enabled development of society and the economy. To this end, countries need to build on the basic research of AI to promote exchanges. Human society will benefit from sharing AI evaluations, application scenarios, and best practices, as well as the various efforts addressing technological, social, and ethical risks, among others.

The core of fundamental AI research lies in innovations in models and algorithms. These are generally considered the strategic high ground of competition among countries. Nevertheless, even in this pivotal field, the adequate sharing of outcomes remains the best way to speed up innovation. In technical AI, challenges based on the same data and tasks, together with regular international symposiums and competitions, are important tools for fostering virtuous competition. They allow AI researchers and engineers to inspire each other to come up with innovative ideas for the future, bringing forward better models and algorithms. Open-source algorithms and systems are the most effective tools for maximizing the impact of innovation. Only in this way can we engage research and technology communities in pushing forward the rapid iteration of AI algorithms and systems. As a representative example, the extensive development and application of deep learning models and systems worldwide is driven by the relevant open-source communities. These directly benefit the creators of the algorithms and systems, bringing them the honor and recognition they deserve. As the history of technological development shows, community contributors from countries all over the world can quickly identify and repair loopholes and safety hazards lurking in algorithms through open-source code and models. Open ecosystems are gradually taking shape in the field of AI, fueling the fast development of AI technologies. Through collaboration, AI innovators, policymakers, and governments across the globe are obliged to carry forward and maintain these mechanisms and platforms for virtuous competition and development.

The current development of AI is still highly uncertain. Countries need to jointly cope with the technical safety, security, and ethical issues arising from its development. Otherwise, many regions will fall victim to the negative impacts of technological advancement, regardless of lessons previously learned in other countries and regions. For example, some middle schools and universities in China adopted an AI application for automatic facial-expression recognition in classrooms. However, this infringed on the privacy of students and negatively affected the interactions between students and teachers. Eventually, various education departments and members of the public spoke out against the system, resulting in its suspension. Unfortunately, these experiences and responses were underappreciated by other countries, and the United States and some European countries let similar cases repeat themselves in the following months. That being said, AI should not be completely banned in schools. The critical issue is whether we can protect the privacy of individuals while simultaneously employing technology to benefit all. For example, Japan has set a good precedent for other countries, as some Japanese primary and middle schools use AI to prevent school bullying. In addition, both long-standing technological and cultural risks lurk in the different paths to realizing AI. For example, AI is primarily considered a tool in the West, whereas Japan regards the technology as a companion and quasi-member of society. In much science fiction, AI is depicted as the common enemy of humankind. In the long run, different visions of the relationship between AI and humankind will generate different types of risks in the course of technological development. As such, it is of vital importance to enter into global collaboration to circumvent such risks, jointly examine long-term issues, and forestall risks. On top of that, we should make efforts to prevent vicious competition, misuse, abuse, and the potential malicious use of AI amid the great uncertainties of technological development.

China is dedicated to becoming one of the world’s premier AI innovation centers. On this account, we first need to clarify two specific considerations. To begin with, every center has different developmental concepts, advantages, and priorities, so it is critical that they learn from each other. The US and the United Kingdom boast prominent advantages as original innovators of the basic theories of AI. European countries have generally laid a solid foundation for AI ethics and governance. Japan has secured outstanding achievements in the relationship between human beings and AI development. China, as one of the leading centers of AI development, should assume the responsibility of empowering the development of AI in less-advantaged countries, especially those that are still developing, to deliver the benefits of AI to every corner of the world. Other countries may draw on these insights as well.

Both Chinese and British scholars have observed that, despite differences in values, principles, or other unresolved divergences, cross-cultural cooperation on AI is still effective as long as a consensus is reached on the handling of specific issues. The philosophy of seeking harmony without uniformity and of providing mutual support through tough times opens communication channels for technological development, regardless of cultural and historical differences.

A global, interconnected landscape of AI development featuring multiple main centers is bound to emerge in the future. To promote this vision, countries should engage in cooperation to strengthen mutual trust, thus avoiding misunderstandings. In order to realize the global sustainable development of humankind, society, the environment and technology, we should give full expression to the positive role of highly inclusive international organizations such as the United Nations. To realize the global development of AI technology and its applications to serve the well-being of humankind and a better future for all of us, we must stand and hold together as a human community with a shared future.

About the Author

Yi Zeng

Professor, Institute of Automation, Chinese Academy of Sciences.

Director of Research Center for AI Ethics and Sustainable Development, Beijing Academy of Artificial Intelligence.

Chief Scientist, Institute for AI International Governance, Tsinghua University.

Adjunct Research Fellow, Center for International Security and Strategy, Tsinghua University.

A shortened Chinese version of this opinion piece was published in Guangming Daily on January 20th, 2021.

https://epaper.gmw.cn/gmrb/html/2021-01/20/nw.D110000gmrb_20210120_3-02.htm

Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance

Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and priorities. There are many barriers to achieving this at present, including mistrust between cultures and the more practical challenges of coordinating across different locations. The paper “Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance”, a joint work of members from the China-UK Research Centre for AI Ethics and Governance, the University of Cambridge (UK), the Beijing Academy of Artificial Intelligence (China), the Chinese Academy of Sciences (China), and Peking University (China), focuses particularly on barriers to cooperation between Europe and North America on the one hand and East Asia on the other, regions which currently have an outsized impact on the development of AI ethics and governance. We suggest that there is reason to be optimistic about achieving greater cross-cultural cooperation on AI ethics and governance. We argue that misunderstandings between cultures and regions play a more important role in undermining cross-cultural trust, relative to fundamental disagreements, than is often supposed. Even where fundamental differences exist, these may not necessarily prevent productive cross-cultural cooperation, for two reasons: (1) cooperation does not require achieving agreement on principles and standards for all areas of AI; and (2) it is sometimes possible to reach agreement on practical issues despite disagreement on more abstract values or principles. We believe that academia has a key role to play in promoting cross-cultural cooperation on AI ethics and governance, by building greater mutual understanding and clarifying where different forms of agreement will be both necessary and possible. We make a number of recommendations for practical steps and initiatives, including the translation and multilingual publication of key documents, researcher exchange programmes, and the development of research agendas on cross-cultural topics. The authors would like to thank the participants of the July 11–12 Cross-cultural Trust for Beneficial AI workshop for valuable discussions relevant to the themes of the paper, as well as Emma Bates, Haydn Belfield, Martina Kunz, Amritha Jayanti, Luke Kemp, Onora O’Neill, and two anonymous manuscript reviewers for helpful comments on previous drafts of this paper.

The paper is available at: https://link.springer.com/article/10.1007/s13347-020-00402-x

Facial Recognition and Public Health Management Survey

Facial recognition technology plays an active role in many settings, from national and social security to daily life, but it also brings hidden risks and challenges to privacy, security, and more. During public health events such as COVID-19, automatic detection techniques such as facial recognition play an active role in prevention and control. Applications involving personal information and privacy should be subject to legal and agile governance. This survey aims to solicit comments and suggestions on the application of facial recognition in daily life and in public health events.

This survey is for scientific research purposes only. The results will be published as white papers and/or open access articles. No personally identifiable information will be collected in this survey.

This survey is a collaborative work among Beijing Academy of Artificial Intelligence, China-UK Research Centre for AI Ethics and Governance, and Schwarzman College of Tsinghua University.

We would be grateful to hear your voice at https://forms.gle/9U5Lc6krFrpE9Ubr9

The Official Launch of ChinUK Centre for AI Ethics and Governance

On November 4th, 2019, a new bilateral China-UK Research Centre for AI Ethics and Governance (“ChinUK”) was officially launched. Led by Professor Yi Zeng of the Institute of Automation (CASIA) at the Chinese Academy of Sciences, and Professor Huw Price of LCFI, ChinUK aims to support exchanges and collaboration between scholars in China and the UK, in the new interdisciplinary field studying the ethics and impacts of AI.

Professor Price exchanged copies of a new MoU with Professor Cheng-Lin Liu, Vice-President of CASIA, at the ChinUK inauguration ceremony in Beijing on Monday 4 November. LCFI researchers Martina Kunz, Yang Liu and Danit Gal, as well as Huw Price, presented talks at the first ChinUK joint workshop. 


Professor Zeng said: “ChinUK is a cross-cultural and transdisciplinary centre on AI Ethics and Governance. Its mission is to build bridges between Eastern and Western perspectives on AI, linking scholars in China and the UK for sharing, interaction, and complementary research. In this way the new CASIA-LCFI partnership will help to promote the development of AI for humanity, ecology, and social good, and for a shared future of Human-AI Symbiosis.”

Professor Price said: “We are delighted to be linked to CASIA in this way. Collaborations between China and the West will be vital in ensuring that AI is safe and beneficial. China will be a leader in the development of AI, and its voice will be just as important in global conversations about the impacts of this technology. ChinUK will support these conversations.”


Linking Artificial Intelligence Principles

Various Artificial Intelligence principles are designed with different considerations, and none of them can be perfect and complete for every scenario. Linking Artificial Intelligence Principles (LAIP) is an initiative and platform for synthesizing, linking, and analyzing various Artificial Intelligence principles worldwide, from different research institutes, non-profit organizations, non-governmental organizations, companies, etc. The effort aims at understanding to what degree these different AI principles proposals share common values, differ from one another, and complement each other.

The current LAIP engine enables users to list and compare different AI principles proposals at the keyword, topic, and paragraph levels.
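As a concrete illustration of what keyword-level comparison can mean, here is a minimal sketch that compares two principles documents by keyword overlap. It is not the LAIP implementation: the frequency-based keyword extractor, the stopword list, and the Jaccard similarity measure are all illustrative assumptions.

```python
# Minimal sketch of keyword-level comparison between two AI principles
# documents. NOT the LAIP implementation: the naive frequency-based
# keyword extractor and the Jaccard measure are illustrative assumptions.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "to", "a", "in", "for", "be", "is",
             "are", "that", "with", "on", "by", "or", "as", "an", "should"}

def extract_keywords(text: str, top_n: int = 20) -> set:
    """Return the top_n most frequent non-stopword tokens as keywords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return {word for word, _ in counts.most_common(top_n)}

def jaccard(a: set, b: set) -> float:
    """Overlap of two keyword sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Toy stand-ins for two principles documents.
doc_a = ("AI should enhance the well-being of humankind, promote fairness "
         "and justice, and protect privacy and security.")
doc_b = ("Artificial intelligence must respect privacy, ensure safety and "
         "security, and advance fairness for all of humankind.")

kw_a, kw_b = extract_keywords(doc_a), extract_keywords(doc_b)
print("Shared keywords:", sorted(kw_a & kw_b))
print("Keyword-level similarity:", round(jaccard(kw_a, kw_b), 3))
```

Topic- and paragraph-level comparison would extend the same idea to coarser text units, for example via topic modeling or sentence-embedding similarity.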

The platform can be accessed HERE.

The paper “Linking Artificial Intelligence Principles” gives more details on the design philosophy and initial observations.

Establishing a China-UK Initiative for AI Ethics and Governance

The China-UK Research Centre for AI Ethics and Governance (the ChinUK Centre, for short) is a cross-cultural and transdisciplinary centre. By linking Eastern and Western wisdom, the Centre aims to bridge efforts from China and the UK to share, interact, and complement one another on AI ethics and governance, to ensure AI develops for humanity and social good. The Centre is hosted at the Innovation Academy of Artificial Intelligence, Chinese Academy of Sciences.

The centre’s characteristic contribution will be complementary, cross-cultural, social, and technical investigation of AI risks, safety, ethics, and governance. China and the UK are representative of Eastern and Western culture and society respectively, and will face not only similar but also very different challenges in the process of AI development and application in their different societies. We are very excited to see how the social and technological efforts of the two countries can be shared and, through interaction, bridged together into a more global vision, finally achieving the shared goal of building beneficial AI for humanity and society. The centre’s current plans for concrete research include, but are not limited to, the following:
1. Linking, Synthesizing, and Analyzing Global Ethical Principles of Artificial Intelligence, Robotics, and Autonomous Systems.
2. Building Technical Models for Low-Risk, High-Safety, and Ethical AI.