Dynamic Environment
Dynamic environment research focuses on enabling robots and autonomous systems to navigate and operate effectively in unpredictable, changing surroundings. Current work emphasizes robust perception and planning algorithms that can handle moving obstacles and uncertain conditions, often combining deep reinforcement learning, model predictive control, and advanced mapping techniques such as implicit neural representations and mesh-based methods. These advances are crucial for improving the safety and efficiency of robots in applications such as autonomous driving, aerial robotics, and collaborative human-robot interaction, ultimately yielding more reliable and adaptable autonomous systems.
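To make the planning side of this concrete, the sketch below shows one generic way a planner can account for a moving obstacle: predict the obstacle forward under a constant-velocity assumption, roll out candidate robot commands over a short horizon, and keep the best collision-free one. This is a minimal illustration only, not the method of any paper listed here; the horizon length, timestep, safety radius, and function names are all illustrative assumptions.

# Minimal sketch (illustrative, not from any specific paper): receding-horizon
# velocity sampling for a point robot avoiding one constant-velocity obstacle.
import numpy as np

HORIZON_STEPS = 10   # prediction horizon in steps (assumed)
DT = 0.1             # timestep in seconds (assumed)
SAFE_DIST = 0.5      # combined robot + obstacle radius (assumed)

def predict_obstacle(obs_pos, obs_vel):
    # Constant-velocity forecast of the obstacle over the horizon.
    return np.array([obs_pos + obs_vel * DT * k for k in range(1, HORIZON_STEPS + 1)])

def rollout(robot_pos, cmd_vel):
    # Forward-simulate the robot under a constant commanded velocity.
    return np.array([robot_pos + cmd_vel * DT * k for k in range(1, HORIZON_STEPS + 1)])

def collision_free(robot_traj, obstacle_traj):
    # A rollout is valid if it never comes within SAFE_DIST of the predicted obstacle.
    dists = np.linalg.norm(robot_traj - obstacle_traj, axis=1)
    return np.all(dists > SAFE_DIST)

def plan_step(robot_pos, goal, obs_pos, obs_vel, v_max=1.0, n_samples=64, rng=None):
    # Sample candidate velocities and pick the collision-free one whose rollout
    # ends closest to the goal; stopping is the fallback if none is safe.
    rng = np.random.default_rng() if rng is None else rng
    obstacle_traj = predict_obstacle(obs_pos, obs_vel)
    best_cmd, best_cost = np.zeros(2), np.inf
    for _ in range(n_samples):
        angle = rng.uniform(0.0, 2.0 * np.pi)
        speed = rng.uniform(0.0, v_max)
        cmd = speed * np.array([np.cos(angle), np.sin(angle)])
        traj = rollout(robot_pos, cmd)
        if not collision_free(traj, obstacle_traj):
            continue
        cost = np.linalg.norm(traj[-1] - goal)
        if cost < best_cost:
            best_cmd, best_cost = cmd, cost
    return best_cmd

if __name__ == "__main__":
    robot, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
    obstacle, obs_vel = np.array([2.5, 1.0]), np.array([0.0, -0.4])  # obstacle crossing the path
    for _ in range(60):
        cmd = plan_step(robot, goal, obstacle, obs_vel)
        robot = robot + cmd * DT
        obstacle = obstacle + obs_vel * DT
        if np.linalg.norm(robot - goal) < 0.1:
            break
    print("final robot position:", robot)

Replanning at every step with a fresh obstacle prediction is what lets this kind of planner tolerate motion the robot did not anticipate; the papers below replace the pieces of this loop (perception, prediction, trajectory scoring) with learned or model-based components.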
Papers
D2SLAM: Semantic visual SLAM based on the Depth-related influence on object interactions for Dynamic environments
Ayman Beghdadi, Malik Mallem, Lotfi Beji
Learning-based Motion Planning in Dynamic Environments Using GNNs and Temporal Encoding
Ruipeng Zhang, Chenning Yu, Jingkai Chen, Chuchu Fan, Sicun Gao
Learning to Walk by Steering: Perceptive Quadrupedal Locomotion in Dynamic Environments
Mingyo Seo, Ryan Gupta, Yifeng Zhu, Alexy Skoutnev, Luis Sentis, Yuke Zhu
Efficient Speed Planning for Autonomous Driving in Dynamic Environment with Interaction Point Model
Yingbing Chen, Ren Xin, Jie Cheng, Qingwen Zhang, Xiaodong Mei, Ming Liu, Lujia Wang