Social Robot Navigation
Social robot navigation focuses on enabling robots to move safely and smoothly through environments shared with humans, adhering to social norms while avoiding collisions. Current research emphasizes the integration of deep reinforcement learning (DRL) and vision-language models (VLMs), often incorporating techniques such as Monte Carlo Tree Search and transformer architectures, to improve trajectory planning and decision-making in complex, dynamic scenarios. This field is crucial for advancing human-robot interaction and for deploying robots in real-world settings such as hospitals, offices, and public spaces, where socially compliant navigation is paramount. The development of large-scale, multimodal datasets and standardized evaluation metrics is another key focus, aimed at ensuring robust and reliable performance.
Papers
SoRTS: Learned Tree Search for Long Horizon Social Robot Navigation
Ingrid Navarro, Jay Patrikar, Joao P. A. Dantas, Rohan Baijal, Ian Higgins, Sebastian Scherer, Jean Oh
A Study on Learning Social Robot Navigation with Multimodal Perception
Bhabaranjan Panigrahi, Amir Hossain Raj, Mohammad Nazeri, Xuesu Xiao