Dynamic Environment
Dynamic environment research focuses on enabling robots and autonomous systems to navigate and operate effectively in unpredictable, changing surroundings. Current work emphasizes robust perception and planning algorithms that handle moving obstacles and uncertain conditions, often incorporating deep reinforcement learning, model predictive control, and advanced mapping techniques such as implicit neural representations and mesh-based methods. These advances are crucial for improving the safety and efficiency of robots in applications such as autonomous driving, aerial robotics, and collaborative human-robot interaction, leading to more reliable and adaptable autonomous systems.
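To make the planning side concrete, below is a minimal sketch of receding-horizon (MPC-style) planning around a moving obstacle. It is an illustrative toy, not the method of any paper listed here: it assumes a 2D point robot with velocity commands, a constant-velocity prediction of a single obstacle, and random-shooting optimization over control sequences; all function names and parameters are invented for this example.

```python
import numpy as np

def rollout(state, controls, dt):
    """Integrate a 2D point robot forward under a sequence of velocity commands."""
    traj, s = [], state.copy()
    for u in controls:
        s = s + u * dt
        traj.append(s.copy())
    return np.array(traj)

def predict_obstacle(obs_pos, obs_vel, horizon, dt):
    """Constant-velocity prediction of the obstacle over the planning horizon."""
    return np.array([obs_pos + obs_vel * dt * (k + 1) for k in range(horizon)])

def sample_mpc_step(state, goal, obs_pos, obs_vel, horizon=10, n_samples=256,
                    dt=0.1, v_max=1.0, safe_dist=0.5, seed=0):
    """Random-shooting MPC: sample control sequences, score them, apply the
    first command of the best sequence (receding horizon)."""
    rng = np.random.default_rng(seed)
    obs_traj = predict_obstacle(obs_pos, obs_vel, horizon, dt)
    best_cost, best_controls = np.inf, None
    for _ in range(n_samples):
        controls = rng.uniform(-v_max, v_max, size=(horizon, 2))
        traj = rollout(state, controls, dt)
        # Cost: cumulative distance to goal plus a heavy penalty for
        # entering the predicted safety margin around the obstacle.
        goal_cost = np.linalg.norm(traj - goal, axis=1).sum()
        dists = np.linalg.norm(traj - obs_traj, axis=1)
        collision_cost = 100.0 * np.sum(np.maximum(0.0, safe_dist - dists))
        cost = goal_cost + collision_cost
        if cost < best_cost:
            best_cost, best_controls = cost, controls
    return best_controls[0]
```

In a closed loop, the robot replans every step as the obstacle moves, which is the basic pattern the intent-prediction and NavRL papers build on with far richer dynamics models and learned policies.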
Papers
Learning Dynamics of a Ball with Differentiable Factor Graph and Roto-Translational Invariant Representations
Qingyu Xiao, Zixuan Wu, Matthew Gombolay
SPIBOT: A Drone-Tethered Mobile Gripper for Robust Aerial Object Retrieval in Dynamic Environments
Gyuree Kang, Ozan Güneş, Seungwook Lee, Maulana Bisyir Azhari, David Hyunchul Shim
NavRL: Learning Safe Flight in Dynamic Environments
Zhefan Xu, Xinming Han, Haoyu Shen, Hanyu Jin, Kenji Shimada
Intent Prediction-Driven Model Predictive Control for UAV Planning and Navigation in Dynamic Environments
Zhefan Xu, Hanyu Jin, Xinming Han, Haoyu Shen, Kenji Shimada