Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and priorities. There are many barriers to achieving this at present, including mistrust between cultures and the more practical challenges of coordinating across different locations. The paper “Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance”, a joint work of members of the China-UK Research Centre for AI Ethics and Governance, the University of Cambridge (UK), the Beijing Academy of Artificial Intelligence (China), the Chinese Academy of Sciences (China), and Peking University (China), focuses particularly on barriers to cooperation between Europe and North America on the one hand and East Asia on the other, as regions which currently have an outsized impact on the development of AI ethics and governance. We suggest that there is reason to be optimistic about achieving greater cross-cultural cooperation on AI ethics and governance. We argue that misunderstandings between cultures and regions play a more important role in undermining cross-cultural trust, relative to fundamental disagreements, than is often supposed. Even where fundamental differences exist, these may not necessarily prevent productive cross-cultural cooperation, for two reasons: (1) cooperation does not require achieving agreement on principles and standards for all areas of AI; and (2) it is sometimes possible to reach agreement on practical issues despite disagreement on more abstract values or principles. We believe that academia has a key role to play in promoting cross-cultural cooperation on AI ethics and governance, by building greater mutual understanding, and by clarifying where different forms of agreement will be both necessary and possible.
We make a number of recommendations for practical steps and initiatives, including the translation and multilingual publication of key documents, researcher exchange programmes, and the development of research agendas on cross-cultural topics. The authors would like to thank the participants of the July 11–12 workshop “Cross-cultural Trust for Beneficial AI” for valuable discussions relevant to the themes of the paper, as well as Emma Bates, Haydn Belfield, Martina Kunz, Amritha Jayanti, Luke Kemp, Onora O’Neill, and two anonymous manuscript reviewers for helpful comments on previous drafts of this paper.
Facial recognition technology plays an active role in many settings, from national and social security to daily life, but it also poses risks to privacy and security. During public health events such as COVID-19, automatic detection techniques such as facial recognition play an active role in prevention and control. Applications involving personal information and privacy should be subject to legal and agile governance. This survey aims to solicit comments and suggestions on the application of facial recognition in daily life and in public health events.
This survey is for scientific research purposes only. The results will be published as white papers and/or open access articles. No personally identifiable information will be collected in this survey.
This survey is a collaborative work among Beijing Academy of Artificial Intelligence, China-UK Research Centre for AI Ethics and Governance, and Schwarzman College of Tsinghua University.
On November 4th, 2019, a new bilateral China-UK Research Centre for AI Ethics and Governance (“ChinUK”) was officially launched. Led by Professor Yi Zeng of the Institute of Automation (CASIA) at the Chinese Academy of Sciences, and Professor Huw Price of LCFI, ChinUK aims to support exchanges and collaboration between scholars in China and the UK, in the new interdisciplinary field studying the ethics and impacts of AI.
Professor Price exchanged copies of a new MoU with Professor Cheng-Lin Liu, Vice-President of CASIA, at the ChinUK inauguration ceremony in Beijing on Monday 4 November. LCFI researchers Martina Kunz, Yang Liu and Danit Gal, as well as Huw Price, presented talks at the first ChinUK joint workshop.
Professor Zeng said: “ChinUK is a cross-cultural and transdisciplinary centre on AI Ethics and Governance. Its mission is to build bridges between Eastern and Western perspectives on AI, linking scholars in China and UK for sharing, interaction, and complementary research. In this way the new CASIA-LCFI partnership will help to promote the development of AI for humanity, ecology, and social good, and for a shared future of Human-AI Symbiosis.”
Professor Price said: “We are delighted to be linked to CASIA in this way. Collaborations between China and the West will be vital in ensuring that AI is safe and beneficial. China will be a leader in the development of AI, and its voice will be just as important in global conversations about the impacts of this technology. ChinUK will support these conversations.”
Various Artificial Intelligence Principles are designed with different considerations, and none of them can be perfect and complete for every scenario. Linking Artificial Intelligence Principles (LAIP) is an initiative and platform for synthesizing, linking, and analyzing various Artificial Intelligence Principles worldwide, from different research institutes, non-profit organizations, non-governmental organizations, companies, and others. These efforts aim at understanding to what degree different AI principles proposals share common values, where they differ, and how they complement each other.
The current LAIP engine enables users to list and compare different AI principles proposals at the keyword, topic, and paragraph levels.
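As a purely illustrative sketch (not the actual LAIP implementation, whose internals are not described here), a keyword-level comparison between two principles documents could be as simple as measuring the overlap between the sets of keywords each document mentions. The keyword list and documents below are hypothetical examples:

```python
def keyword_overlap(doc_a: str, doc_b: str, keywords: set) -> float:
    """Jaccard similarity of the keyword sets found in each document.

    Returns a value between 0.0 (no shared keywords) and 1.0
    (identical keyword sets).
    """
    found_a = {k for k in keywords if k in doc_a.lower()}
    found_b = {k for k in keywords if k in doc_b.lower()}
    union = found_a | found_b
    if not union:
        return 0.0
    return len(found_a & found_b) / len(union)


# Hypothetical keyword list and document snippets for illustration only.
keywords = {"privacy", "fairness", "transparency", "accountability", "safety"}
doc1 = "Principles emphasising privacy, transparency and safety."
doc2 = "Guidelines focused on fairness, transparency and safety of AI."

score = keyword_overlap(doc1, doc2, keywords)  # shares 2 of 4 keywords found
```

A real comparison engine would need more than exact substring matching (tokenization, multilingual handling, and topic modelling for the topic and paragraph levels), but the same overlap idea underlies keyword-level comparison.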
The China-UK Research Centre for AI Ethics and Governance (the ChinUK Centre, for short) is a cross-cultural and transdisciplinary centre. By linking Eastern and Western wisdom, the Centre aims to bridge efforts from China and the UK to share, interact on, and complement each other's work on AI ethics and governance, ensuring that AI develops for humanity and social good. The Centre is hosted at the Innovation Academy of Artificial Intelligence, Chinese Academy of Sciences.
The characteristic contribution of the Centre will be complementary, cross-cultural, social, and technical investigation of AI risks, safety, ethics, and governance. China and the UK are representative of Eastern and Western culture and society respectively, and will face not only similar but also very different challenges as AI is developed and applied in their respective societies. We are very excited to see how the social and technological efforts of the two countries can be shared and, through interaction, bridged into a more global vision, ultimately achieving the shared goal of building beneficial AI for humanity and society. The Centre's current research plans include, but are not limited to, the following:
1. Linking, Synthesizing, and Analyzing Global Ethical Principles of Artificial Intelligence, Robotics, and Autonomous Systems.
2. Building Technical Models for Low-risk, High-safety, and Ethical AI.