Obstacle Avoidance
Obstacle avoidance research focuses on enabling robots and autonomous systems to navigate complex environments safely by generating collision-free trajectories. Current efforts concentrate on robust control strategies, typically built on model predictive control (MPC), control barrier functions (CBFs), and deep reinforcement learning (DRL), sometimes combined with perception techniques such as ray tracing and sensor fusion. These advances are central to improving safety and efficiency across applications ranging from warehouse logistics and industrial automation to assistive robotics and aerospace.
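As a concrete illustration of the CBF-based approach mentioned above, the sketch below shows a minimal safety filter for a single-integrator robot avoiding one circular obstacle. It is a generic textbook construction, not the method of any paper listed here; the dynamics, the barrier function h, the gain alpha, and the names (cbf_safety_filter, obstacle_center, safe_radius, u_nom) are illustrative assumptions.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, obstacle_center, safe_radius, alpha=1.0):
    """Minimally modify u_nom so that h_dot + alpha * h >= 0 holds, where
    h(x) = ||x - obstacle_center||^2 - safe_radius^2 (h > 0 means safe).

    For a single-integrator robot (x_dot = u) and a single constraint, the QP
        min ||u - u_nom||^2   s.t.   grad_h . u >= -alpha * h
    reduces to the closed-form half-space projection used below.
    """
    diff = x - obstacle_center
    h = diff @ diff - safe_radius ** 2
    grad_h = 2.0 * diff
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0.0:
        return u_nom                    # nominal control already satisfies the CBF condition
    # Project u_nom onto the half-space {u : grad_h . u >= -alpha * h}
    return u_nom - (slack / (grad_h @ grad_h)) * grad_h

# Usage: a simple go-to-goal controller whose commands are filtered around one obstacle.
x, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obstacle, radius, dt = np.array([2.5, 0.1]), 1.0, 0.05
for _ in range(200):
    u_nom = goal - x                    # proportional go-to-goal law
    u = cbf_safety_filter(x, u_nom, obstacle, radius)
    x = x + dt * u                      # forward-Euler state update
```

The same filtering idea extends to multiple obstacles or richer dynamics by solving the QP numerically instead of using the single-constraint closed form.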
Papers
Dynamic Obstacle Avoidance through Uncertainty-Based Adaptive Planning with Diffusion
Vineet Punyamoorty, Pascal Jutras-Dubé, Ruqi Zhang, Vaneet Aggarwal, Damon Conover, Aniket Bera
Learning Bipedal Walking for Humanoid Robots in Challenging Environments with Obstacle Avoidance
Marwan Hamze (LISV), Mitsuharu Morisawa (AIST), Eiichi Yoshida (CNRS-AIST JRL)
Deep Reinforcement Learning-based Obstacle Avoidance for Robot Movement in Warehouse Environments
Keqin Li, Jiajing Chen, Denzhi Yu, Tao Dajun, Xinyu Qiu, Lian Jieting, Sun Baiwei, Zhang Shengyuan, Zhenyu Wan, Ran Ji, Bo Hong, Fanghao Ni
Like a Martial Arts Dodge: Safe Expeditious Whole-Body Control of Mobile Manipulators for Collision Avoidance
Bingjie Chen, Houde Liu, Chongkun Xia, Liang Han, Xueqian Wang, Bin Liang
A Fairness-Oriented Control Framework for Safety-Critical Multi-Robot Systems: Alternative Authority Control
Lei Shi, Qichao Liu, Cheng Zhou, Xiong Li
Bearing-Distance Based Flocking with Zone-Based Interactions
Hossein B. Jond
Mission Planning on Autonomous Avoidance for Spacecraft Confronting Orbital Debris
Chen Xingwen, Wang Tong, Qiu Jianbin, Feng Jianbo