State Adversarial
Research on state-adversarial attacks focuses on enhancing the robustness of reinforcement learning (RL) agents against attacks that subtly manipulate their input observations (states). Current work investigates this problem across various RL algorithms, including Q-learning and multi-agent methods like QMIX, often employing adversarial training techniques or exploring alternative policy optimization methods to improve resilience. This research is crucial for deploying RL agents in real-world scenarios where adversarial manipulations could have significant consequences, impacting the reliability and safety of autonomous systems and other applications.
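To make the attack model concrete, here is a minimal sketch of a generic FGSM-style state perturbation against a toy linear Q-function. Everything here (the weights `W`, the budget `eps`, the function names) is a hypothetical illustration, not the method of any particular paper: the attacker nudges the observation within an L-infinity ball to degrade the value of the action the agent would have chosen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear Q-function for illustration: Q(s)[a] = (W @ s)[a].
n_actions, state_dim = 4, 8
W = rng.normal(size=(n_actions, state_dim))

def q_values(state):
    return W @ state

def fgsm_state_attack(state, eps=0.2):
    """FGSM-style observation perturbation within an L-infinity ball of
    radius eps: step against the gradient of the greedy action's Q-value."""
    a_star = int(np.argmax(q_values(state)))
    grad = W[a_star]  # dQ(s, a*)/ds for a linear Q-function
    return state - eps * np.sign(grad), a_star

s = rng.normal(size=state_dim)
s_adv, a_clean = fgsm_state_attack(s)
# The perturbation can only lower the originally greedy action's Q-value
print(q_values(s_adv)[a_clean] <= q_values(s)[a_clean])
```

Adversarial training against such attacks typically regenerates `s_adv` for states sampled during learning and trains the agent on the perturbed observations, so the learned policy stays consistent inside the perturbation ball.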