Paper ID: 2406.04724
Probabilistic Perspectives on Error Minimization in Adversarial Reinforcement Learning
Roman Belaire, Arunesh Sinha, Pradeep Varakantham
Deep Reinforcement Learning (DRL) policies are highly susceptible to adversarial noise in observations, which poses significant risks in safety-critical scenarios. For instance, a self-driving car could experience catastrophic consequences if its sensory inputs about traffic signs are manipulated by an adversary. The core challenge in such situations is that the true state of the environment becomes only partially observable due to these adversarial manipulations. Two key strategies have been employed in the literature so far: the first set of methods focuses on increasing the likelihood that nearby states (those close to the true state) share the same robust actions; the second set maximizes the value of the worst possible true state within the range of adversarially perturbed observations. Although these approaches provide strong robustness against attacks, they tend to be either overly conservative or poorly generalizable. We hypothesize that these shortcomings stem from a failure to explicitly account for partial observability. By making decisions that directly consider this partial knowledge of the true state, we believe it is possible to achieve a better balance between robustness and performance, particularly in adversarial settings. To this end, we introduce a novel objective called Adversarial Counterfactual Error (ACoE), which is defined on beliefs about the underlying true state and naturally balances value optimization with robustness against adversarial attacks, together with a theoretically grounded, scalable surrogate objective, Cumulative-ACoE (C-ACoE). Our empirical evaluations demonstrate that our method significantly outperforms current state-of-the-art approaches to adversarial RL, offering a promising direction for better DRL under adversarial conditions.
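As a rough, hypothetical illustration of the belief-based decision idea described in the abstract (not the paper's actual ACoE or C-ACoE definitions), the sketch below selects the action maximizing the expected Q-value under a belief over plausible true states consistent with a perturbed observation, rather than optimizing only for a single worst-case state. The function names, the uniform epsilon-ball belief, and the toy numbers are all assumptions made purely for illustration.

```python
import numpy as np

def belief_over_true_states(observation, candidate_states, epsilon):
    """Hypothetical belief: uniform over candidate true states within an
    epsilon-ball of the (possibly adversarially perturbed) observation."""
    mask = np.linalg.norm(candidate_states - observation, axis=1) <= epsilon
    weights = mask.astype(float)
    if weights.sum() == 0:  # no candidate nearby: fall back to uniform belief
        weights = np.ones(len(candidate_states))
    return weights / weights.sum()

def belief_weighted_action(q_values, belief):
    """Pick the action maximizing expected Q-value under the belief,
    instead of the worst case over all plausible true states."""
    # q_values: (num_states, num_actions); belief: (num_states,)
    expected_q = belief @ q_values
    return int(np.argmax(expected_q))

# Toy usage: three candidate true states, two actions
candidate_states = np.array([[0.0], [0.1], [0.9]])
q_values = np.array([[1.0, 0.2],
                     [0.9, 0.3],
                     [0.1, 1.5]])
obs = np.array([0.05])  # adversarially perturbed observation
belief = belief_over_true_states(obs, candidate_states, epsilon=0.2)
print(belief_weighted_action(q_values, belief))
```

A more conservative variant, in the spirit of the maximin approaches the abstract contrasts with, would replace the belief-weighted expectation with a minimum over the candidate states; the belief-based choice instead weights plausible true states by how consistent they are with the observation.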
Submitted: Jun 7, 2024