Paper ID: 2410.14685

Leveraging Event Streams with Deep Reinforcement Learning for End-to-End UAV Tracking

Ala Souissi (Lab-STICC_RAMBO, IMT Atlantique - INFO), Hajer Fradi (Lab-STICC_RAMBO, IMT Atlantique - INFO), Panagiotis Papadakis (Lab-STICC_RAMBO, IMT Atlantique - INFO)

In this paper, we present our proposed approach for active tracking to increase the autonomy of Unmanned Aerial Vehicles (UAVs) using event cameras, low-energy imaging sensors that offer significant advantages in speed and dynamic range. The proposed tracking controller responds to visual feedback from the mounted event sensor, adjusting the drone's movements to follow the target. To leverage the full motion capabilities of a quadrotor and the unique properties of event sensors, we propose an end-to-end deep reinforcement learning (DRL) framework that maps raw sensor data from event streams directly to control actions for the UAV. To learn an optimal policy under highly variable and challenging conditions, we train in a simulation environment with domain randomization for effective transfer to real-world environments. We demonstrate the effectiveness of our approach through experiments in challenging scenarios, including fast-moving targets and changing lighting conditions, and show improved generalization capabilities.
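The abstract describes mapping raw event streams to control actions. Event-based pipelines typically first aggregate the asynchronous event stream (pixel coordinates, timestamp, polarity) into a dense tensor that a policy network can consume. The sketch below shows one common such representation, a time-binned voxel grid; it is illustrative only, and the paper's actual input encoding and network may differ.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events into a time-binned voxel grid.

    events: array of shape (N, 4) with columns (x, y, t, polarity),
            where polarity > 0 means an ON event, else OFF.
    Returns a float32 tensor of shape (num_bins, height, width) in which
    ON events add +1 and OFF events add -1 to their (bin, y, x) cell.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid
    t = events[:, 2]
    t0, t1 = t.min(), t.max()
    # Normalize timestamps into [0, num_bins); the very last event
    # falls exactly on num_bins and is clipped into the final bin.
    if t1 > t0:
        bins = ((t - t0) / (t1 - t0) * num_bins).astype(int)
    else:
        bins = np.zeros(len(events), dtype=int)
    bins = np.clip(bins, 0, num_bins - 1)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    pol = np.where(events[:, 3] > 0, 1.0, -1.0)
    # Unbuffered scatter-add so repeated (bin, y, x) indices accumulate.
    np.add.at(grid, (bins, ys, xs), pol)
    return grid
```

The resulting tensor can then be flattened or passed through a convolutional encoder whose output parameterizes the DRL policy's action distribution (e.g., body rates and thrust for a quadrotor); those downstream details are not specified by the abstract.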

Submitted: Oct 3, 2024