Dynamic Environment
Dynamic environment research focuses on enabling robots and autonomous systems to navigate and operate effectively in unpredictable, changing surroundings. Current work emphasizes robust perception and planning algorithms that handle moving obstacles and uncertain conditions, often incorporating deep reinforcement learning, model predictive control, and advanced mapping techniques such as implicit neural representations and mesh-based methods. These advances are crucial for improving the safety and efficiency of robots in diverse applications such as autonomous driving, aerial robotics, and collaborative human-robot interaction, ultimately leading to more reliable and adaptable autonomous systems.
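To make the planning side of this concrete, the sketch below shows one common building block for dynamic environments: forecasting a moving obstacle under a constant-velocity assumption and checking a candidate robot path against that forecast. This is a minimal illustrative example, not the method of any paper listed here; the function names, the safety radius, and the constant-velocity model are all assumptions chosen for clarity.

```python
# Minimal sketch of dynamic-obstacle checking for a candidate path.
# Assumptions (not from the papers below): constant-velocity obstacle
# motion, a fixed safety radius, and synchronized time steps between
# the robot path and the obstacle forecast.
import numpy as np


def forecast_obstacle(position, velocity, horizon, dt):
    """Predict obstacle positions over `horizon` steps assuming constant velocity."""
    steps = np.arange(1, horizon + 1)[:, None]                   # (horizon, 1)
    return position[None, :] + steps * dt * velocity[None, :]    # (horizon, 2)


def path_is_safe(path, obstacle_pos, obstacle_vel, dt, safety_radius=0.5):
    """Return True if every waypoint stays outside the safety radius of the
    obstacle's predicted position at the same time step."""
    predicted = forecast_obstacle(obstacle_pos, obstacle_vel, len(path), dt)
    distances = np.linalg.norm(path - predicted, axis=1)
    return bool(np.all(distances > safety_radius))


if __name__ == "__main__":
    dt = 0.1
    # Straight-line candidate path for the robot over 20 steps.
    path = np.stack([np.linspace(0.0, 2.0, 20), np.zeros(20)], axis=1)
    # Obstacle starting at (2, 1) and moving toward the path.
    print(path_is_safe(path, np.array([2.0, 1.0]), np.array([-0.5, -0.5]), dt))
```

In practice, reactive controllers and MPC-style planners repeat a check like this at every control step over many candidate trajectories, replacing the constant-velocity forecast with learned or probabilistic motion predictions when they are available.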
Papers
Reactive Base Control for On-The-Move Mobile Manipulation in Dynamic Environments
Ben Burgess-Limerick, Jesse Haviland, Chris Lehnert, Peter Corke
Visual Forecasting as a Mid-level Representation for Avoidance
Hsuan-Kung Yang, Tsung-Chih Chiang, Ting-Ru Liu, Chun-Wei Huang, Jou-Min Liu, Chun-Yi Lee
Heuristic-based Incremental Probabilistic Roadmap for Efficient UAV Exploration in Dynamic Environments
Zhefan Xu, Christopher Suzuki, Xiaoyang Zhan, Kenji Shimada