Dynamic Environment
Dynamic environment research focuses on enabling robots and autonomous systems to navigate and operate effectively in unpredictable, changing surroundings. Current work emphasizes robust perception and planning algorithms that can handle moving obstacles and uncertain conditions, often incorporating deep reinforcement learning, model predictive control, and advanced mapping techniques such as implicit neural representations and mesh-based methods. These advances are crucial for improving the safety and efficiency of robots in applications such as autonomous driving, aerial robotics, and collaborative human-robot interaction, ultimately yielding more reliable and adaptable autonomous systems.
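To make the planning side of this concrete, the sketch below shows a minimal receding-horizon (MPC-style) planner for a 2D point robot that avoids a single moving obstacle by rolling out candidate velocity commands and scoring them against a goal-tracking term and a soft collision penalty. Every detail here (the dynamics, candidate speeds, horizon length, and cost weights) is an illustrative assumption, not a method taken from the listed papers.

```python
import itertools

def plan_step(robot, goal, obstacle, obs_vel, horizon=5, dt=0.2):
    """Pick the (vx, vy) command whose short rollout best trades off
    goal progress against predicted proximity to a moving obstacle.
    All parameters are illustrative assumptions."""
    speeds = [-1.0, -0.5, 0.0, 0.5, 1.0]  # coarse candidate velocity grid
    best_cmd, best_cost = (0.0, 0.0), float("inf")
    for vx, vy in itertools.product(speeds, repeat=2):
        cost = 0.0
        x, y = robot
        ox, oy = obstacle
        for _ in range(horizon):
            # Roll out robot and obstacle under constant-velocity models.
            x, y = x + vx * dt, y + vy * dt
            ox, oy = ox + obs_vel[0] * dt, oy + obs_vel[1] * dt
            # Goal-tracking cost plus a soft penalty that grows near the obstacle.
            cost += (x - goal[0]) ** 2 + (y - goal[1]) ** 2
            d2 = (x - ox) ** 2 + (y - oy) ** 2
            cost += 1.0 / (d2 + 1e-6)
        if cost < best_cost:
            best_cost, best_cmd = cost, (vx, vy)
    return best_cmd

# Usage: replan every step as the obstacle moves (the receding-horizon loop).
cmd = plan_step(robot=(0.0, 0.0), goal=(2.0, 0.0),
                obstacle=(1.0, 0.1), obs_vel=(0.0, -0.5))
```

In a real system the exhaustive grid search would be replaced by a proper trajectory optimizer or a learned policy, but the replan-at-every-step structure is the common thread between the MPC and deep-reinforcement-learning approaches mentioned above.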
Papers
Human-Following and -guiding in Crowded Environments using Semantic Deep-Reinforcement-Learning for Mobile Service Robots
Linh Kästner, Bassel Fatloun, Zhengcheng Shen, Daniel Gawrisch, Jens Lambrecht
Arena-Bench: A Benchmarking Suite for Obstacle Avoidance Approaches in Highly Dynamic Environments
Linh Kästner, Teham Bhuiyan, Tuan Anh Le, Elias Treis, Johannes Cox, Boris Meinardus, Jacek Kmiecik, Reyk Carstens, Duc Pichel, Bassel Fatloun, Niloufar Khorsandi, Jens Lambrecht