Collision Avoidance
Collision avoidance research focuses on enabling safe, efficient navigation for multiple agents, such as robots, UAVs, and spacecraft, in dynamic environments. Current efforts center on robust control strategies, often model predictive control (MPC) frameworks integrated with control barrier functions (CBFs) or reinforcement learning (RL), sometimes augmented with diffusion models or neural networks for improved perception and planning. These advances are crucial for applications such as autonomous driving, multi-robot coordination, and space operations, where system complexity makes safety guarantees increasingly important. The field is also exploring distributed control methods and human-robot collaboration to address communication limitations and shared autonomy.
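To make the CBF idea concrete, here is a minimal sketch of a CBF-based safety filter, under simplifying assumptions not drawn from any of the papers below: a single-integrator robot (velocity is the control input), one circular obstacle, and the common barrier h(x) = ||x - x_obs||^2 - r^2 with linear class-K gain alpha. The constraint dh/dt + alpha*h >= 0 becomes a single linear inequality on the velocity, so the usual CBF quadratic program reduces to a closed-form projection of the desired velocity; the function name and parameter values are illustrative.

```python
def cbf_filter(x, u_des, x_obs, r, alpha=1.0):
    """Minimally modify desired velocity u_des so the robot at x
    satisfies the CBF condition dh/dt + alpha*h >= 0 for a
    circular obstacle at x_obs with radius r.

    Barrier: h(x) = ||x - x_obs||^2 - r^2  (h >= 0 means safe).
    With single-integrator dynamics x_dot = u, the condition is
    a . u >= b where a = 2*(x - x_obs) and b = -alpha*h.
    """
    dx = (x[0] - x_obs[0], x[1] - x_obs[1])
    h = dx[0] ** 2 + dx[1] ** 2 - r ** 2
    a = (2.0 * dx[0], 2.0 * dx[1])
    b = -alpha * h
    a_dot_u = a[0] * u_des[0] + a[1] * u_des[1]
    if a_dot_u >= b:
        # Desired velocity already satisfies the constraint.
        return u_des
    # Project u_des onto the half-space a . u >= b
    # (closed-form solution of the one-constraint CBF-QP).
    scale = (b - a_dot_u) / (a[0] ** 2 + a[1] ** 2)
    return (u_des[0] + scale * a[0], u_des[1] + scale * a[1])
```

For example, a robot at (2, 0) commanded straight toward an obstacle of radius 1 at the origin with u_des = (-1, 0) gets its approach speed reduced just enough to satisfy the barrier condition, while commands pointing away from the obstacle pass through unchanged. Real systems replace the closed-form projection with a QP solver once multiple constraints or actuation limits are involved.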
Papers
Evaluating the Benefit of Using Multiple Low-Cost Forward-Looking Sonar Beams for Collision Avoidance in Small AUVs
Christopher Morency, Daniel J. Stilwell
Smooth Trajectory Collision Avoidance through Deep Reinforcement Learning
Sirui Song, Kirk Saunders, Ye Yue, Jundong Liu
Decentralized Planning for Car-Like Robotic Swarm in Cluttered Environments
Changjia Ma, Zhichao Han, Tingrui Zhang, Jingping Wang, Long Xu, Chengyang Li, Chao Xu, Fei Gao
Generalization in Deep Reinforcement Learning for Robotic Navigation by Reward Shaping
Victor R. F. Miranda, Armando A. Neto, Gustavo M. Freitas, Leonardo A. Mozelli
Obstacle Identification and Ellipsoidal Decomposition for Fast Motion Planning in Unknown Dynamic Environments
Mehmetcan Kaymaz, Nazim Kemal Ure