Paper ID: 2204.12181
Collaborative Target Search with a Visual Drone Swarm: An Adaptive Curriculum Embedded Multistage Reinforcement Learning Approach
Jiaping Xiao, Phumrapee Pisutsin, Mir Feroskhan
Equipping drones with target search capabilities is highly desirable for applications such as disaster rescue and smart warehouse delivery systems. Multiple intelligent drones that can collaborate with one another and maneuver among obstacles are more effective at accomplishing such tasks in a shorter amount of time. However, carrying out collaborative target search (CTS) without prior target information is extremely challenging, especially with a visual drone swarm. In this work, we propose a novel data-efficient deep reinforcement learning (DRL) approach called adaptive curriculum embedded multistage learning (ACEMSL) to address these challenges, namely exploration of a 3-D sparse-reward space under limited visual perception and the need for collaborative behaviors. Specifically, we decompose the CTS task into several subtasks, including individual obstacle avoidance, target search, and inter-agent collaboration, and progressively train the agents with multistage learning. Meanwhile, an adaptive embedded curriculum (AEC) is designed, in which the task difficulty level (TDL) is adaptively adjusted according to the success rate (SR) achieved during training. ACEMSL enables data-efficient training and individual-team reward allocation for the visual drone swarm. Furthermore, we deploy the trained model on a real visual drone swarm and perform CTS operations without fine-tuning. Extensive simulations and real-world flight tests validate the effectiveness and generalizability of ACEMSL. The project is available at https://github.com/NTU-UAVG/CTS-visual-drone-swarm.git.
Submitted: Apr 26, 2022
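The adaptive embedded curriculum described in the abstract adjusts the task difficulty level from the observed success rate. The Python sketch below illustrates one plausible way such a success-rate-driven adjustment could be implemented; the window size, the thresholds promote_sr and demote_sr, the class name, and the env.set_difficulty hook are illustrative assumptions and are not taken from the paper or its repository.

# Minimal sketch of an adaptive curriculum that raises or lowers a task
# difficulty level (TDL) based on the recent training success rate (SR).
# Thresholds, window size, and the environment hook are illustrative
# assumptions, not the paper's actual hyperparameters or API.
from collections import deque


class AdaptiveCurriculum:
    def __init__(self, max_level=5, window=100,
                 promote_sr=0.8, demote_sr=0.3):
        self.level = 0                       # current task difficulty level
        self.max_level = max_level
        self.results = deque(maxlen=window)  # rolling episode outcomes
        self.promote_sr = promote_sr         # SR above which the task is hardened
        self.demote_sr = demote_sr           # SR below which the task is eased

    def record(self, success: bool) -> int:
        """Log one episode outcome and return the (possibly updated) TDL."""
        self.results.append(1.0 if success else 0.0)
        if len(self.results) == self.results.maxlen:
            sr = sum(self.results) / len(self.results)
            if sr >= self.promote_sr and self.level < self.max_level:
                self.level += 1
                self.results.clear()         # re-evaluate SR at the new level
            elif sr <= self.demote_sr and self.level > 0:
                self.level -= 1
                self.results.clear()
        return self.level


# Hypothetical training-loop usage (env, policy, and set_difficulty are
# assumed placeholders, not APIs from the paper's codebase):
# curriculum = AdaptiveCurriculum()
# for episode in range(num_episodes):
#     success = run_episode(env, policy)
#     env.set_difficulty(curriculum.record(success))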