
Robots that learn to fear for better risk assessment
The instinct to freeze when conditions are uncertain, duck at a sudden noise, or flee from danger is a rapid reaction that is critical to the survival of living beings in unfamiliar environments.
One of the most recent lines of research in robotics explores how to translate emotional response mechanisms typical of humans, particularly fear, into robots in order to improve their ability to assess risks and avoid dangerous situations.
A study published in the scientific journal IEEE Robotics and Automation Letters by Professor Alessandro Rizzo - from the PoliTO Department of Electronics and Telecommunications (DET) - marks a significant step forward in this field. The research was also featured in IEEE Spectrum - the online magazine of IEEE, the world's largest professional organization dedicated to engineering and applied science - within the IEEE Journal Watch series on engineering and computing, run in collaboration with IEEE Xplore.
Professor Rizzo's team pointed out that the robotic control systems currently in use are often designed for very specific tasks. “As a result, robots may struggle to operate effectively in complex and changing conditions,” Rizzo explained. Unlike humans, who respond to diverse, complex and unfamiliar stimuli, robots find it difficult to adapt.
A popular theory of how the human brain works in fearful situations, called the “dual-pathway hypothesis”, describes two main neural pathways that allow us to respond to risks. The first is the “low road”, a short pathway that produces a rapid response driven by the amygdala - the brain structure that drives emotions - and allows decision-making based on raw data, as in the case of immediate defensive action. The second is the “high road”, a pathway that integrates more elaborate reasoning and past experience, with mediation from the prefrontal cortex, the region of the brain responsible for higher cognitive functions, such as planning.
Professor Rizzo and DET PhD student Andrea Usai designed a robot controller that emulates the fear response via the “low road”. Fear was chosen because it is one of the most studied emotions in neuroscience and plays a critical role in self-preservation and rapid responses to danger.
The control algorithm developed at PoliTO combines model predictive control (MPC) with reinforcement learning. The former controls the robot in real time, ensuring compliance with the constraints imposed by the task at hand. The latter lets the robot dynamically adapt its priorities based on raw data from the environment, shaping the overall behavior of the system in a flexible and intelligent way.
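To make the idea more concrete, here is a minimal, hypothetical sketch in Python of how such a coupling could look: a sampling-based receding-horizon controller steers a simple point robot towards a goal, while a “fear” signal computed from raw proximity data rescales the obstacle-avoidance weight of the cost at every step. The dynamics, the fear_weight function, and all numerical values are illustrative assumptions; in the actual architecture the weight adaptation is learned with reinforcement learning and the MPC formulation is far more sophisticated.

```python
# Illustrative sketch only: a point robot steered by a sampling-based MPC whose
# obstacle-avoidance weight is modulated at run time by a "fear" signal.
# In the architecture described above this modulation is learned with
# reinforcement learning; a hand-written placeholder stands in for it here.
import numpy as np

rng = np.random.default_rng(0)

DT, HORIZON, N_SAMPLES = 0.1, 10, 256
U_MAX = 1.0                       # actuation limit (a simple task constraint)
GOAL = np.array([5.0, 0.0])
OBSTACLE = np.array([2.5, 0.2])   # hypothetical hazard, kept static here

def fear_weight(distance_to_hazard):
    """Placeholder for the learned policy: raw proximity -> safety priority.
    The closer the hazard, the larger the weight on the avoidance term."""
    return 1.0 + 20.0 * np.exp(-distance_to_hazard)

def rollout_cost(state, controls, w_safety):
    """Finite-horizon cost: progress to the goal + fear-scaled proximity penalty."""
    cost, x = 0.0, state.copy()
    for u in controls:
        x = x + DT * u                                  # single-integrator model
        cost += np.linalg.norm(x - GOAL)                # task (tracking) term
        d = np.linalg.norm(x - OBSTACLE)
        cost += w_safety / (d + 1e-3)                   # avoidance term
    return cost

def mpc_step(state):
    """Sampling-based receding-horizon step: pick the best of N random plans."""
    w = fear_weight(np.linalg.norm(state - OBSTACLE))   # "low road" modulation
    candidates = rng.uniform(-U_MAX, U_MAX, size=(N_SAMPLES, HORIZON, 2))
    costs = [rollout_cost(state, seq, w) for seq in candidates]
    return candidates[int(np.argmin(costs))][0]         # apply the first input only

x = np.array([0.0, 0.0])
min_d = np.inf
for _ in range(120):
    x = x + DT * mpc_step(x)
    min_d = min(min_d, np.linalg.norm(x - OBSTACLE))
print(f"final position {x}, closest approach to hazard {min_d:.2f} m")
```

The point of the sketch is the division of labour: the MPC enforces the task and its constraints at every step, while the adaptive weight only shifts how strongly the optimization prioritizes safety over progress, which is the role the learned component plays in the architecture described above.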
The simulations conducted by Rizzo and Usai showed that a robot guided by the “low road” controller can move through unfamiliar environments along a smoother and safer path towards its target than conventional robotic designs. For example, in a scenario with several types of dynamic hazards, the “low road” robot maintained a safe distance of about 3.1 meters from the dangerous objects, while two conventional robots tested for comparison came as close as 30 and 80 cm, respectively.
The “low road” approach looks promising in multiple scenarios, including object handling, surveillance and rescue operations, where robots face dangerous conditions and may need to adopt more cautious behaviors. However, Usai points out that the “low road” approach is highly reactive and better suited to quick, short-term decisions. The research team is therefore now working on a control design that mimics the “high road,” which, complementing the “low road,” could help robots make more “rational,” long-term decisions by evaluating different scenarios.
Looking to the future, the researchers are considering the use of multimodal language models (such as ChatGPT). As Rizzo explains, “These models could help simulate some of the core functions of the human prefrontal cortex, such as decision-making, strategic planning, and context evaluation, allowing us to emulate more cognitively driven responses in robots”. “Looking ahead, it would also be interesting to try to extend the architecture to incorporate multiple emotions,” Rizzo adds, “enabling a richer and more nuanced form of adaptive behavior in robotic systems.”