Robot Navigation
Robot navigation research focuses on enabling robots to move safely and efficiently through various environments, often guided by human instructions or pre-defined goals. Current efforts concentrate on improving robustness and adaptability through techniques like integrating vision-language models (VLMs) for semantic understanding, employing reinforcement learning (RL) for dynamic environments, and developing hierarchical planning methods to handle complex, long-horizon tasks. These advancements are crucial for deploying robots in real-world settings, such as healthcare, logistics, and exploration, where safe and efficient navigation is paramount.
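To make the hierarchical pattern mentioned above concrete, here is a minimal, illustrative sketch of a language-guided navigation loop: a high-level step grounds an instruction into semantic subgoals (a stand-in for a VLM, mocked here with fixed waypoints), and a low-level controller steps toward each subgoal in turn. All names (`query_vlm`, `Waypoint`, `step_toward`, `navigate`) are hypothetical and do not correspond to any specific system from the papers listed below.

```python
# Minimal sketch of hierarchical, language-guided navigation.
# The "VLM" is mocked; in a real system it would ground the instruction
# in camera observations and return semantic subgoals.
from dataclasses import dataclass
import math

@dataclass
class Waypoint:
    x: float
    y: float
    label: str  # semantic tag the high-level planner attached

def query_vlm(instruction: str) -> list[Waypoint]:
    """Hypothetical stand-in for a vision-language model that turns an
    instruction into semantic subgoals; here it returns fixed waypoints."""
    return [Waypoint(2.0, 0.0, "hallway"), Waypoint(2.0, 3.0, "doorway")]

def step_toward(pos: tuple[float, float], goal: Waypoint, step: float = 0.5):
    """Low-level controller: move a fixed-size step toward the current subgoal."""
    dx, dy = goal.x - pos[0], goal.y - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return (goal.x, goal.y), True          # subgoal reached
    return (pos[0] + step * dx / dist, pos[1] + step * dy / dist), False

def navigate(instruction: str, start=(0.0, 0.0)):
    pos = start
    for wp in query_vlm(instruction):          # high level: semantic subgoals
        reached = False
        while not reached:                     # low level: local stepping
            pos, reached = step_toward(pos, wp)
        print(f"reached {wp.label} at ({pos[0]:.1f}, {pos[1]:.1f})")
    return pos

if __name__ == "__main__":
    navigate("go through the hallway and stop at the doorway")
```

The two-level split is the point of the sketch: the semantic planner can be swapped for a real VLM query, and the local stepper for an RL policy or classical planner, without changing the overall loop.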
Papers
Robot Navigation Using Physically Grounded Vision-Language Models in Outdoor Environments
Mohamed Elnoor, Kasun Weerakoon, Gershom Seneviratne, Ruiqi Xian, Tianrui Guan, Mohamed Khalid M Jaffar, Vignesh Rajagopal, Dinesh Manocha
Resolving Positional Ambiguity in Dialogues by Vision-Language Models for Robot Navigation
Kuan-Lin Chen, Tzu-Ti Wei, Li-Tzu Yeh, Elaine Kao, Yu-Chee Tseng, Jen-Jee Chen
WildFusion: Multimodal Implicit 3D Reconstructions in the Wild
Yanbaihui Liu, Boyuan Chen