Robot Navigation
Robot navigation research focuses on enabling robots to move safely and efficiently through various environments, often guided by human instructions or pre-defined goals. Current efforts concentrate on improving robustness and adaptability through techniques like integrating vision-language models (VLMs) for semantic understanding, employing reinforcement learning (RL) for dynamic environments, and developing hierarchical planning methods to handle complex, long-horizon tasks. These advancements are crucial for deploying robots in real-world settings, such as healthcare, logistics, and exploration, where safe and efficient navigation is paramount.
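To make the hierarchical-planning idea mentioned above concrete, the sketch below splits navigation into a coarse global layer and a reactive local layer: an A* planner over a toy occupancy grid produces waypoints, and a simple proportional controller steps the robot toward them. This is a generic illustration under assumed parameters (the grid, the gain, and all function names are illustrative), not the method of any paper listed here.

```python
# Minimal hierarchical-navigation sketch (illustrative, not from the listed papers):
# a coarse global planner (A* on a 2D occupancy grid) feeds waypoints to a
# simple proportional waypoint follower acting as the local layer.
import heapq
import itertools


def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid; cells with value 1 are obstacles."""
    rows, cols = len(grid), len(grid[0])
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # admissible Manhattan heuristic
    tie = itertools.count()  # unique tiebreaker so the heap never compares nodes
    open_set = [(h(start, goal), next(tie), 0.0, start, None)]
    came_from, g_best = {}, {start: 0.0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # lazy deletion: skip already-expanded nodes
            continue
        came_from[node] = parent
        if node == goal:               # reconstruct path by walking parents back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1.0
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt, goal), next(tie), ng, nxt, node))
    return []  # no path found


def follow_waypoint(pose, waypoints, gain=0.5):
    """One step of a proportional follower toward the next waypoint (local layer)."""
    if not waypoints:
        return pose
    (x, y), (wx, wy) = pose, waypoints[0]
    return (x + gain * (wx - x), y + gain * (wy - y))


if __name__ == "__main__":
    # Grid indices are treated directly as 2D coordinates for simplicity.
    grid = [
        [0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0],
    ]
    plan = astar(grid, (0, 0), (3, 3))
    print("global plan:", plan)
    pose = (0.0, 0.0)
    for _ in range(3):
        pose = follow_waypoint(pose, plan[1:])
        print("local step:", pose)
```

Real systems replace the grid with a map built online (e.g. supervoxels or implicit representations, as in the papers below) and the proportional step with a dynamics-aware local controller, but the two-layer structure is the same.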
Papers
IR-MCL: Implicit Representation-Based Online Global Localization
Haofei Kuang, Xieyuanli Chen, Tiziano Guadagnino, Nicky Zimmerman, Jens Behley, Cyrill Stachniss
Cloud Hopping: Navigating in 3D Uneven Environments via Supervoxels and Control Lyapunov Function
Fetullah Atas, Grzegorz Cielniak, Lars Grimstad