Unknown Environment
Research on unknown environment navigation focuses on enabling robots and autonomous agents to explore, plan paths, and complete tasks in unmapped or partially mapped spaces. Current efforts concentrate on robust algorithms such as reinforcement learning (including actor-critic and hierarchical approaches) and control barrier functions for safety, together with efficient mapping techniques (e.g., NeRF-based representations and probabilistic information gain), often combined with multi-agent coordination and communication strategies. This work is crucial for advancing robotics in fields such as search and rescue, exploration, and autonomous driving, where safe and efficient operation in complex, unpredictable environments is required.
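A recurring building block across this line of work, including frontier-led coverage approaches like the one listed below, is identifying frontier cells: known-free cells that border unexplored space and therefore make natural exploration targets. The following is a minimal sketch, not taken from any of the listed papers; the cell labels, 4-connected neighbourhood, and example grid are illustrative assumptions.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1  # illustrative occupancy-grid labels


def find_frontiers(grid: np.ndarray) -> list[tuple[int, int]]:
    """Return free cells that border at least one unknown cell.

    Driving toward such cells is the usual way to expand the mapped
    region of an unknown environment.
    """
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 4-connected neighbourhood check for adjacent unknown space
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(
                0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN
                for nr, nc in neighbours
            ):
                frontiers.append((r, c))
    return frontiers


# Tiny example: a 4x4 map whose right half is still unexplored.
grid = np.array([
    [0, 0, -1, -1],
    [0, 0, -1, -1],
    [0, 1, -1, -1],
    [0, 0,  0, -1],
])
print(find_frontiers(grid))  # free cells adjacent to unknown space
```

In practice these frontier cells are typically clustered and ranked (e.g., by expected information gain or travel cost) before being assigned to one or more robots.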
Papers
Catch Me If You Hear Me: Audio-Visual Navigation in Complex Unmapped Environments with Moving Sounds
Abdelrahman Younes, Daniel Honerkamp, Tim Welschehold, Abhinav Valada
Frontier-led Swarming: Robust Multi-Robot Coverage of Unknown Environments
Vu Phi Tran, Matthew A. Garratt, Kathryn Kasmarik, Sreenatha G. Anavatti