Local Navigation
Local navigation research aims to enable robots and autonomous systems to navigate complex environments safely and efficiently toward goals specified in various ways, including natural-language instructions or visual cues. Current efforts concentrate on improving perception (e.g., multi-sensor fusion, 3D reconstruction, and vision-language models) and planning (e.g., reinforcement learning, model predictive control, and A* search) in dynamic, cluttered settings. These advances are crucial for applications ranging from autonomous vehicles and drones to assistive technologies for visually impaired users, with impact on both robotics and accessibility.
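To make the planning side concrete, below is a minimal sketch of A* search on a 4-connected occupancy grid, one of the classical planners named above. The grid encoding (0 = free, 1 = occupied), the uniform step cost, and the function name astar are illustrative assumptions for this sketch, not details taken from any of the listed papers.

import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (illustrative sketch).

    grid: 2D list where 0 = free, 1 = occupied.
    start, goal: (row, col) tuples.
    Returns a list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible on a 4-connected grid with unit costs.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
    came_from = {}
    g_cost = {start: 0}

    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Reconstruct the path by walking parent links back to the start.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > g_cost.get(cell, float("inf")):
            continue  # stale heap entry; a cheaper route was already found
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1  # uniform step cost (an assumption of this sketch)
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

# Example: plan around an obstacle wall in the middle row.
grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(astar(grid, (0, 0), (2, 3)))

Real systems layer this kind of global grid planner under a local reactive controller (e.g., the MPC or learned policies mentioned above), which handles dynamic obstacles between replanning cycles.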
Papers
RoomTour3D: Geometry-Aware Video-Instruction Tuning for Embodied Navigation
Mingfei Han, Liang Ma, Kamila Zhumakhanova, Ekaterina Radionova, Jingyi Zhang, Xiaojun Chang, Xiaodan Liang, Ivan Laptev
DTAA: A Detect, Track and Avoid Architecture for navigation in spaces with Multiple Velocity Objects
Samuel Nordström, Björn Lindquist, George Nikolakopoulos
LiveNet: Robust, Minimally Invasive Multi-Robot Control for Safe and Live Navigation in Constrained Environments
Srikar Gouru, Siddharth Lakkoju, Rohan Chandra
NaVILA: Legged Robot Vision-Language-Action Model for Navigation
An-Chieh Cheng, Yandong Ji, Zhaojing Yang, Xueyan Zou, Jan Kautz, Erdem Bıyık, Hongxu Yin, Sifei Liu, Xiaolong Wang
MOANA: Multi-Radar Dataset for Maritime Odometry and Autonomous Navigation Application
Hyesu Jang, Wooseong Yang, Hanguen Kim, Dongje Lee, Yongjin Kim, Jinbum Park, Minsoo Jeon, Jaeseong Koh, Yejin Kang, Minwoo Jung, Sangwoo Jung, Ayoung Kim
VLN-Game: Vision-Language Equilibrium Search for Zero-Shot Semantic Navigation
Bangguo Yu, Yuzhen Liu, Lei Han, Hamidreza Kasaei, Tingguang Li, Ming Cao
InstruGen: Automatic Instruction Generation for Vision-and-Language Navigation Via Large Multimodal Models
Yu Yan, Rongtao Xu, Jiazhao Zhang, Peiyang Li, Xiaodan Liang, Jianqin Yin