International Research Center for AI Ethics and Governance

AI with Cognitive Empathy for Reducing Safety Risks of Others

Frontiers in Neuromorphic Engineering recently published a paper from Prof. Yi Zeng’s interdisciplinary team on Brain-inspired AI and AI Ethics, titled “A Brain-Inspired Theory of Mind Spiking Neural Network for Reducing Safety Risks of Other Agents”. The main objective of this paper is to introduce Theory of Mind (also known as Cognitive Empathy) into brain-inspired AI models to reduce the safety risks faced by other agents, that is, to proactively help other agents avoid danger. This is an effort towards achieving Brain-inspired Ethical AI.

The paper can be accessed here:

Here is the abstract of the paper:

Artificial Intelligence (AI) systems are increasingly applied to complex tasks that involve interaction with multiple agents. Such interaction-based systems can lead to safety risks. Due to limited perception and prior knowledge, agents acting in the real world may unconsciously hold false beliefs and strategies about their environment, leading to safety risks in their future decisions. Humans can usually rely on the high-level theory of mind (ToM) capability to perceive the mental states of others, identify risk-inducing errors, and offer timely help to keep others away from dangerous situations. Inspired by the biological information processing mechanism of ToM, we propose a brain-inspired theory of mind spiking neural network (ToM-SNN) model that enables agents to perceive such risk-inducing errors in others’ mental states and decide to help others when necessary. The ToM-SNN model incorporates coordination mechanisms among multiple brain areas and biologically realistic spiking neural networks (SNNs) trained with Reward-modulated Spike-Timing-Dependent Plasticity (R-STDP). To verify the effectiveness of the ToM-SNN model, we conducted various experiments in gridworld environments with randomized agent starting positions and randomly placed blocking walls. Experimental results demonstrate that the agent with the ToM-SNN model selects rescue behavior to help others avoid safety risks based on self-experience and prior knowledge. To the best of our knowledge, this study provides a new perspective on how agents can help others avoid potential risks based on bio-inspired ToM mechanisms, and may offer further inspiration for research on safety risks.
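The abstract notes that the SNNs are trained with Reward-modulated Spike-Timing-Dependent Plasticity (R-STDP). The core idea of R-STDP in general is that spike-timing pairings build an eligibility trace at each synapse, and a global reward signal then gates whether that trace is consolidated into a weight change. The following is a minimal, hypothetical sketch of this general rule, not the paper's actual implementation; all parameter names and values (`a_plus`, `a_minus`, `tau`, `lr`) are illustrative assumptions:

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Classic exponential STDP window (dt = t_post - t_pre, in ms).

    Pre-before-post pairings (dt > 0) contribute potentiation;
    post-before-pre pairings (dt < 0) contribute depression.
    Parameters here are illustrative, not taken from the paper.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

def r_stdp_update(w, pre_spikes, post_spikes, reward, lr=0.01):
    """One R-STDP step for a single synapse.

    First accumulate an eligibility trace from all pre/post spike-time
    pairings, then scale the weight change by the global reward signal:
    positive reward reinforces causal (pre-before-post) pairings,
    negative reward weakens them.
    """
    trace = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            trace += stdp_window(t_post - t_pre)
    return w + lr * reward * trace
```

For example, a pre-synaptic spike at 10 ms followed by a post-synaptic spike at 15 ms yields a positive trace, so a positive reward increases the weight while a negative reward decreases it. This reward gating is what lets timing-based plasticity implement reinforcement-style learning in SNNs.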
