Paper ID: 2503.08740 • Published Mar 11, 2025
Cooperative Bearing-Only Target Pursuit via Multiagent Reinforcement Learning: Design and Experiment
This paper addresses the multi-robot pursuit problem for an unknown target,
encompassing both target state estimation and pursuit control. First, in state
estimation, we focus on using only bearing information, as it is readily
available from vision sensors and effective for small, distant targets.
Challenges such as instability due to the nonlinearity of bearing measurements
and singularities in the two-angle representation are addressed through a
proposed uniform bearing-only information filter. This filter integrates
multiple 3D bearing measurements, provides a concise formulation, and enhances
stability and resilience to target loss caused by limited field of view (FoV).
Second, in target pursuit control within complex environments, where challenges
such as heterogeneity and limited FoV arise, conventional methods like
differential games or Voronoi partitioning often prove inadequate. To address
these limitations, we propose a novel multiagent reinforcement learning (MARL)
framework, enabling multiple heterogeneous vehicles to search, localize, and
follow a target while effectively handling these challenges. Third, to bridge
the sim-to-real gap, we propose two key techniques: incorporating adjustable
low-level control gains during training to replicate the dynamics of real-world
autonomous ground vehicles (AGVs), and adopting spectral-normalized RL
algorithms to enhance policy smoothness and robustness. Finally, we demonstrate
the successful zero-shot transfer of the MARL controllers to AGVs, validating
the effectiveness and practical feasibility of our approach. The accompanying
video is available at this https URL.
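For readers unfamiliar with bearing-only estimation, a minimal sketch of the underlying measurement model may help; the notation here is assumed for illustration and is not taken from the paper:

\[
g_i \;=\; \frac{p_t - p_i}{\lVert p_t - p_i \rVert}, \qquad
P_{g_i} \;=\; I_3 - g_i g_i^{\top},
\]

where \(p_t\) and \(p_i\) denote the target and pursuer positions, \(g_i\) is the 3D unit bearing observed by pursuer \(i\), and the orthogonal projection matrix \(P_{g_i}\) is a standard device for fusing multiple bearing measurements directly in vector form, sidestepping the azimuth-elevation (two-angle) parameterization and the singularities it introduces.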
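As an illustration of the first sim-to-real technique (adjustable low-level control gains), the following Python sketch models an AGV whose inner velocity loop is governed by a gain resampled each training episode; the first-order model and the gain range are assumptions for illustration, not the paper's simulator:

import numpy as np

def agv_step(pos, vel, cmd_vel, k_gain, dt=0.05):
    # First-order inner-loop model: the actual velocity chases the commanded
    # velocity at rate k_gain, mimicking a low-level controller whose
    # responsiveness varies across real AGVs.
    vel = vel + k_gain * (cmd_vel - vel) * dt
    pos = pos + vel * dt
    return pos, vel

# Resample the low-level gain at the start of each training episode so the
# MARL policy is exposed to a range of actuator dynamics (hypothetical range).
k_gain = np.random.uniform(1.0, 5.0)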
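Similarly, a minimal sketch of a spectrally normalized policy network, using PyTorch's built-in spectral_norm utility; the architecture and layer sizes are illustrative assumptions rather than the paper's network:

import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SpectralNormPolicy(nn.Module):
    # Actor network whose linear layers are spectrally normalized.
    # Bounding each layer's spectral norm constrains the policy's Lipschitz
    # constant, which smooths its response to observation noise and aids
    # zero-shot sim-to-real transfer.
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(obs_dim, hidden)), nn.Tanh(),
            spectral_norm(nn.Linear(hidden, hidden)), nn.Tanh(),
            spectral_norm(nn.Linear(hidden, act_dim)),  # mean of the action distribution
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Example: bearing-based observation vector -> velocity command for one pursuer
# (dimensions are hypothetical).
policy = SpectralNormPolicy(obs_dim=12, act_dim=2)
action_mean = policy(torch.randn(1, 12))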