Navigation Skill
Navigation skill research in robotics and AI focuses on enabling agents to reach goals efficiently and effectively across varied environments, guided by instructions or learned objectives. Current work emphasizes robust, adaptable navigation systems built on reinforcement learning, large language models (LLMs), and biologically inspired approaches such as active inference, often combining multimodal data fusion (e.g., vision and language) with advanced planning techniques. These advances are crucial for improving autonomous systems across diverse applications, from mobile robots and autonomous vehicles to virtual agents, and they contribute to a deeper understanding of both artificial and biological navigation strategies.
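To make the reinforcement-learning side of goal-directed navigation concrete, the following is a minimal illustrative sketch (not drawn from any of the papers listed below): a tabular Q-learning agent that learns to reach a goal cell in a small grid world. All names (GridWorld, q_learn, the reward values) are assumptions chosen for the example.

```python
# Minimal sketch: tabular Q-learning for goal-directed grid navigation.
# Illustrative only; GridWorld, q_learn, and the reward shaping are assumptions,
# not the method of any specific paper cited in this digest.
import random

class GridWorld:
    """4x4 grid; the agent starts at (0, 0) and must reach the goal at (3, 3)."""
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def __init__(self, size=4, goal=(3, 3)):
        self.size, self.goal = size, goal

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos = (r, c)
        done = self.pos == self.goal
        reward = 1.0 if done else -0.04  # small step cost encourages short paths
        return self.pos, reward, done

def q_learn(env, episodes=500, alpha=0.5, gamma=0.95, eps=0.1):
    """Learn a state-action value table with epsilon-greedy Q-learning."""
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Explore with probability eps, otherwise act greedily.
            if random.random() < eps:
                a = random.randrange(4)
            else:
                a = max(range(4), key=lambda a_: q.get((s, a_), 0.0))
            s2, r, done = env.step(a)
            best_next = 0.0 if done else max(q.get((s2, a_), 0.0) for a_ in range(4))
            target = r + gamma * best_next
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            s = s2
    return q

if __name__ == "__main__":
    env = GridWorld()
    q = q_learn(env)
    # Greedy rollout with the learned values, capped to avoid looping.
    s, done, path = env.reset(), False, [(0, 0)]
    for _ in range(20):
        if done:
            break
        a = max(range(4), key=lambda a_: q.get((s, a_), 0.0))
        s, _, done = env.step(a)
        path.append(s)
    print("Learned path to goal:", path)
```

The same agent-environment loop generalizes to the richer settings the papers below address, where raw grid coordinates are replaced by fused visual and language observations and the tabular policy by a learned model.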
Papers
UNMuTe: Unifying Navigation and Multimodal Dialogue-like Text Generation
Niyati Rawal, Roberto Bigazzi, Lorenzo Baraldi, Rita Cucchiara
Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions
Qingbin Zeng, Qinglong Yang, Shunan Dong, Heming Du, Liang Zheng, Fengli Xu, Yong Li