Robot Navigation
Robot navigation research focuses on enabling robots to move safely and efficiently through various environments, often guided by human instructions or pre-defined goals. Current efforts concentrate on improving robustness and adaptability through techniques like integrating vision-language models (VLMs) for semantic understanding, employing reinforcement learning (RL) for dynamic environments, and developing hierarchical planning methods to handle complex, long-horizon tasks. These advancements are crucial for deploying robots in real-world settings, such as healthcare, logistics, and exploration, where safe and efficient navigation is paramount.
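To make the common pattern behind several of these papers concrete, the following is a minimal sketch (not taken from any listed work) of a hierarchical, language-guided navigation loop: a high-level planner uses a semantic scorer, standing in for a VLM, to rank candidate subgoals against a human instruction, and a low-level controller drives toward the selected subgoal. All names, labels, and numbers below are illustrative placeholders, and the keyword-overlap scorer is only a stand-in for a real vision-language query.

```python
# Hedged sketch of language-conditioned hierarchical navigation.
# A real system would query a VLM with camera observations; here the
# "VLM" is a toy keyword-overlap scorer so the example stays runnable.
import math


def vlm_score(instruction: str, subgoal_label: str) -> float:
    """Placeholder semantic relevance score between an instruction and a
    candidate subgoal description (a real VLM call would go here)."""
    inst = set(instruction.lower().split())
    label = set(subgoal_label.lower().split())
    return len(inst & label) / max(len(label), 1)


def select_subgoal(instruction: str, candidates: list[dict]) -> dict:
    """High-level planner: pick the semantically most relevant subgoal."""
    return max(candidates, key=lambda c: vlm_score(instruction, c["label"]))


def local_step(pose: tuple, subgoal: dict, step: float = 0.5):
    """Low-level controller: move a fixed step toward the subgoal position."""
    dx, dy = subgoal["xy"][0] - pose[0], subgoal["xy"][1] - pose[1]
    dist = math.hypot(dx, dy)
    if dist < step:  # close enough: snap to the subgoal and report done
        return subgoal["xy"], True
    return (pose[0] + step * dx / dist, pose[1] + step * dy / dist), False


if __name__ == "__main__":
    instruction = "go to the sidewalk near the building entrance"
    candidates = [
        {"label": "grass field", "xy": (3.0, -2.0)},
        {"label": "sidewalk near entrance", "xy": (5.0, 4.0)},
        {"label": "parking lot", "xy": (-4.0, 1.0)},
    ]
    pose, done = (0.0, 0.0), False
    subgoal = select_subgoal(instruction, candidates)
    while not done:
        pose, done = local_step(pose, subgoal)
    print(f"Reached subgoal '{subgoal['label']}' at {pose}")
```

The split between a semantic subgoal selector and a geometric local controller mirrors the hierarchical designs discussed above; in practice the scorer would be an actual VLM and the controller a proper planner (for example, MPC or a learned policy) rather than the straight-line stepper used in this toy example.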
Papers
BehAV: Behavioral Rule Guided Autonomy Using VLMs for Robot Navigation in Outdoor Scenes
Kasun Weerakoon, Mohamed Elnoor, Gershom Seneviratne, Vignesh Rajagopal, Senthil Hariharan Arul, Jing Liang, Mohamed Khalid M Jaffar, Dinesh Manocha
Initialization of Monocular Visual Navigation for Autonomous Agents Using Modified Structure from Small Motion
Juan-Diego Florez, Mehregan Dor, Panagiotis Tsiotras
Key-Scan-Based Mobile Robot Navigation: Integrated Mapping, Planning, and Control using Graphs of Scan Regions
Dharshan Bashkaran Latha, Ömür Arslan
ReMEmbR: Building and Reasoning Over Long-Horizon Spatio-Temporal Memory for Robot Navigation
Abrar Anwar, John Welsh, Joydeep Biswas, Soha Pouya, Yan Chang
Hey Robot! Personalizing Robot Navigation through Model Predictive Control with a Large Language Model
Diego Martinez-Baselga, Oscar de Groot, Luzia Knoedler, Javier Alonso-Mora, Luis Riazuelo, Luis Montano
Learning a Terrain- and Robot-Aware Dynamics Model for Autonomous Mobile Robot Navigation
Jan Achterhold, Suresh Guttikonda, Jens U. Kreber, Haolong Li, Joerg Stueckler
DIGIMON: Diagnosis and Mitigation of Sampling Skew for Reinforcement Learning based Meta-Planner in Robot Navigation
Shiwei Feng, Xuan Chen, Zhiyuan Cheng, Zikang Xiong, Yifei Gao, Siyuan Cheng, Sayali Kate, Xiangyu Zhang