Collision Avoidance
Collision avoidance research aims to enable safe, efficient navigation for multiple agents, such as robots, UAVs, and spacecraft, operating in dynamic environments. Current efforts center on robust control strategies, often model predictive control (MPC) frameworks integrated with control barrier functions (CBFs) or reinforcement learning (RL) algorithms, sometimes augmented with diffusion models or neural networks for improved perception and planning. These advances matter for applications including autonomous driving, multi-robot coordination, and space operations, where they improve safety and efficiency in increasingly complex systems. The field is also exploring distributed control and human-robot collaboration to address challenges such as limited communication and shared autonomy.
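To make the CBF idea mentioned above concrete, the sketch below shows a minimal safety filter for a single circular obstacle. It assumes single-integrator dynamics, a fixed class-K gain alpha, and a hypothetical function name cbf_safety_filter; none of this is taken from the listed papers, and the one-constraint QP is solved in closed form (projection onto a half-space) rather than with a general QP solver.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, x_obs, r_safe, alpha=1.0):
    """Minimally modify a nominal control so the robot stays outside a circular obstacle.

    Illustrative assumptions: single-integrator dynamics x_dot = u, barrier
    h(x) = ||x - x_obs||^2 - r_safe^2, and the continuous-time CBF condition
    grad_h(x) . u + alpha * h(x) >= 0. With one linear constraint, the QP
    min ||u - u_nom||^2 reduces to projecting u_nom onto a half-space.
    """
    diff = x - x_obs
    h = diff @ diff - r_safe**2      # barrier value (>= 0 in the safe set)
    a = 2.0 * diff                   # gradient of h with respect to x
    b = -alpha * h                   # constraint: a . u >= b
    if a @ u_nom >= b:
        return u_nom                 # nominal control already satisfies the CBF condition
    # Closest safe control: project u_nom onto the half-space {u : a . u >= b}
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Usage sketch: proportional go-to-goal controller filtered for safety.
x = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
x_obs, r_safe, dt = np.array([2.5, 0.1]), 1.0, 0.05

for _ in range(200):
    u_nom = 1.0 * (goal - x)                        # nominal controller
    u = cbf_safety_filter(x, u_nom, x_obs, r_safe)  # safety filter
    x = x + dt * u                                  # Euler step of x_dot = u
print("final position:", x, "distance to obstacle:", np.linalg.norm(x - x_obs))
```

The filter only intervenes when the nominal command would violate the barrier condition, which is why CBF layers pair naturally with MPC or RL policies: the upstream planner stays unchanged and the filter acts as a last-resort safety check. Note that the formal guarantee holds in continuous time; discretization introduces small errors.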
Papers
Disentangling Uncertainty for Safe Social Navigation using Deep Reinforcement Learning
Daniel Flögel, Marcos Gómez Villafañe, Joshua Ransiek, Sören Hohmann
A hierarchical framework for collision avoidance in robot-assisted minimally invasive surgery
Jacinto Colan, Ana Davila, Khusniddin Fozilov, Yasuhisa Hasegawa
Multi-Agent Obstacle Avoidance using Velocity Obstacles and Control Barrier Functions
Alejandro Sánchez Roncero, Rafael I. Cabral Muchacho, Petter Ögren
Mission Planning on Autonomous Avoidance for Spacecraft Confronting Orbital Debris
Chen Xingwen, Wang Tong, Qiu Jianbin, Feng Jianbo