Dynamic Environment
Research on dynamic environments focuses on enabling robots and autonomous systems to navigate and operate reliably in unpredictable, changing surroundings. Current work emphasizes robust perception and planning algorithms, often built on deep reinforcement learning, model predictive control, and mapping techniques such as implicit neural representations, mesh-based maps, and spatiotemporal occupancy grids, to handle moving obstacles and uncertain conditions. These advances improve the safety and efficiency of robots in applications such as autonomous driving, aerial robotics, and human-robot collaboration, leading to more reliable and adaptable autonomous systems.
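To make the planning side concrete, the following Python sketch illustrates one common ingredient: checking a candidate robot trajectory against a spatiotemporal occupancy grid, i.e. one occupancy map per future time step, built here from a simple constant-velocity prediction of a moving obstacle. It is a minimal illustration, not drawn from any of the papers listed below; all names, grid sizes, and parameters are assumptions.

# Minimal sketch (assumptions throughout): collision checking of a candidate
# trajectory against a spatiotemporal occupancy grid for one moving obstacle.
import numpy as np

GRID_RES = 0.1      # meters per cell (assumed)
GRID_SIZE = 100     # cells per side (assumed)
HORIZON = 20        # number of future time steps (assumed)

def world_to_cell(xy):
    """Convert a world-frame (x, y) position to integer grid indices."""
    return tuple(np.clip((np.asarray(xy) / GRID_RES).astype(int), 0, GRID_SIZE - 1))

def predict_obstacle_grids(obstacle_pos, obstacle_vel, dt=0.1):
    """Build one boolean occupancy grid per future step from a constant-velocity
    prediction of a single obstacle (a deliberately simple motion model)."""
    grids = np.zeros((HORIZON, GRID_SIZE, GRID_SIZE), dtype=bool)
    for t in range(HORIZON):
        future = obstacle_pos + obstacle_vel * (t * dt)
        i, j = world_to_cell(future)
        grids[t, i, j] = True  # real maps would also inflate by the robot radius
    return grids

def trajectory_is_safe(trajectory, grids):
    """Return True if the candidate trajectory (HORIZON x 2 positions) never
    enters a cell that is occupied at the matching time step."""
    for t, pos in enumerate(trajectory[:HORIZON]):
        i, j = world_to_cell(pos)
        if grids[t, i, j]:
            return False
    return True

if __name__ == "__main__":
    # Robot moves slowly along +x; the obstacle starts ahead and drifts toward it.
    dt = 0.1
    robot_traj = np.array([[0.5 + 0.2 * t * dt, 2.0] for t in range(HORIZON)])
    grids = predict_obstacle_grids(np.array([3.0, 2.0]), np.array([-1.0, 0.0]), dt)
    print("candidate trajectory safe:", trajectory_is_safe(robot_traj, grids))

A planner such as a sampling-based or MPC-style method would evaluate many candidate trajectories with a check like this (plus cost terms) each replanning cycle; the papers below study richer predictions, learned policies, and multi-agent variants of the same idea.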
Papers
Logic Learning from Demonstrations for Multi-step Manipulation Tasks in Dynamic Environments
Yan Zhang, Teng Xue, Amirreza Razmjoo, Sylvain Calinon
Empirical Analysis of the Dynamic Binary Value Problem with IOHprofiler
Diederick Vermetten, Johannes Lengler, Dimitri Rusin, Thomas Bäck, Carola Doerr
Decentralized Multi-Agent Trajectory Planning in Dynamic Environments with Spatiotemporal Occupancy Grid Maps
Siyuan Wu, Gang Chen, Moji Shi, Javier Alonso-Mora
Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination
Saad Abdul Ghani, Zizhao Wang, Peter Stone, Xuesu Xiao
An LLM-Based Digital Twin for Optimizing Human-in-the Loop Systems
Hanqing Yang, Marie Siew, Carlee Joe-Wong
Trajectory Planning of Robotic Manipulator in Dynamic Environment Exploiting DRL
Osama Ahmad, Zawar Hussain, Hammad Naeem