Obstacle Avoidance
Obstacle avoidance research focuses on enabling robots and autonomous systems to safely navigate complex environments by generating collision-free trajectories. Current efforts concentrate on developing robust control strategies, often employing model predictive control (MPC), control barrier functions (CBFs), and deep reinforcement learning (DRL), sometimes integrated with advanced perception techniques like ray tracing and sensor fusion. These advancements are crucial for improving the safety and efficiency of autonomous systems in various applications, from warehouse logistics and industrial automation to assistive robotics and aerospace.
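To make one of these control strategies concrete, the sketch below shows a minimal control barrier function (CBF) safety filter for a single-integrator robot avoiding one circular obstacle. It is an illustrative example under simplifying assumptions (single-integrator dynamics, a quadratic distance barrier, one constraint solved in closed form), not the method of any paper listed here; all names and parameters are hypothetical.

import numpy as np

def cbf_safety_filter(x, u_des, x_obs, r, alpha=1.0):
    """Minimal CBF safety filter for a single-integrator robot (x_dot = u).

    Barrier: h(x) = ||x - x_obs||^2 - r^2, with h >= 0 meaning safe.
    Enforces h_dot >= -alpha * h by projecting u_des onto the half-space
    {u : grad_h . u >= -alpha * h}; with a single affine constraint this
    closed-form projection solves the usual CBF quadratic program.
    """
    grad_h = 2.0 * (x - x_obs)                       # gradient of the barrier
    h = float(np.dot(x - x_obs, x - x_obs) - r**2)   # barrier value
    slack = np.dot(grad_h, u_des) + alpha * h
    if slack >= 0.0:                                 # desired input already safe
        return u_des
    # Otherwise apply the minimal correction along grad_h that
    # restores the constraint with equality.
    return u_des - slack * grad_h / np.dot(grad_h, grad_h)

# Usage: a nominal proportional controller drives straight at the goal;
# the filter bends the velocity command around a unit-radius obstacle
# at the origin only when the nominal command would violate the barrier.
x = np.array([-2.0, 0.1])
goal = np.array([2.0, 0.0])
u_nominal = goal - x
u_safe = cbf_safety_filter(x, u_nominal, x_obs=np.zeros(2), r=1.0)
print(u_nominal, u_safe)

The appeal of this filter structure is that it leaves any nominal planner or learned policy untouched whenever its command is safe, intervening only at the safety boundary; MPC- and DRL-based pipelines often wrap a layer like this around their outputs.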
Papers
Sampling-based Safe Reinforcement Learning for Nonlinear Dynamical Systems
Wesley A. Suttle, Vipul K. Sharma, Krishna C. Kosaraju, S. Sivaranjani, Ji Liu, Vijay Gupta, Brian M. Sadler
Dexterous Legged Locomotion in Confined 3D Spaces with Reinforcement Learning
Zifan Xu, Amir Hossain Raj, Xuesu Xiao, Peter Stone