Obstacle Avoidance
Obstacle avoidance research focuses on enabling robots and autonomous systems to safely navigate complex environments by generating collision-free trajectories. Current efforts concentrate on developing robust control strategies, often employing model predictive control (MPC), control barrier functions (CBFs), and deep reinforcement learning (DRL), sometimes integrated with advanced perception techniques like ray tracing and sensor fusion. These advancements are crucial for improving the safety and efficiency of autonomous systems in various applications, from warehouse logistics and industrial automation to assistive robotics and aerospace.
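To make the control barrier function (CBF) idea above concrete, the sketch below shows a minimal safety filter for a single-integrator robot avoiding one circular obstacle. The barrier choice, parameter names, and closed-form projection are illustrative assumptions for this simple setting, not the method of any paper listed here; practical systems typically solve the underlying QP with a solver and richer dynamics.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, x_obs, radius, alpha=1.0):
    """Minimally modify a nominal control so a circular obstacle is avoided.

    Illustrative assumptions: single-integrator dynamics x_dot = u and the
    barrier h(x) = ||x - x_obs||^2 - radius^2. The CBF condition
    grad_h . u >= -alpha * h(x) is a single linear constraint, so the QP
    min ||u - u_nom||^2 subject to that constraint has a closed-form
    solution: project u_nom onto the feasible half-space.
    """
    grad_h = 2.0 * (x - x_obs)                          # gradient of the barrier
    h = float(np.dot(x - x_obs, x - x_obs)) - radius**2  # barrier value (>0 when safe)
    b = -alpha * h                                       # constraint: grad_h . u >= b
    slack = b - float(np.dot(grad_h, u_nom))
    if slack <= 0.0:
        return u_nom                                     # nominal control already safe
    # Project u_nom onto the half-space {u : grad_h . u >= b}
    return u_nom + (slack / float(np.dot(grad_h, grad_h))) * grad_h

# Example: robot at the origin, nominal command heading straight at an obstacle
x = np.array([0.0, 0.0])
u_nom = np.array([1.0, 0.0])
u_safe = cbf_safety_filter(x, u_nom, x_obs=np.array([2.0, 0.0]), radius=1.0)
# The filter slows the approach just enough to satisfy the barrier condition.
```

The filter leaves the nominal control untouched whenever it already satisfies the barrier condition, which is what makes CBFs attractive as a lightweight safety layer on top of a separately designed planner or learned policy.
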
Papers
Planning the path with Reinforcement Learning: Optimal Robot Motion Planning in RoboCup Small Size League Environments
Mateus G. Machado, João G. Melo, Cleber Zanchettin, Pedro H. M. Braga, Pedro V. Cunha, Edna N. S. Barros, Hansenclever F. Bassani
TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation
Junli Ren, Yikai Liu, Yingru Dai, Junfeng Long, Guijin Wang
Evaluating Dynamic Environment Difficulty for Obstacle Avoidance Benchmarking
Moji Shi, Gang Chen, Álvaro Serra Gómez, Siyuan Wu, Javier Alonso-Mora