Robust Deep Reinforcement Learning
Robust deep reinforcement learning (DRL) aims to create AI agents that perform reliably in unpredictable environments, addressing the brittleness of standard DRL policies under distribution shift and adversarial attack. Current research enhances robustness through techniques such as adversarial training, risk-sensitive algorithms (e.g., exponential criteria or quantile regression), and adaptive perturbation methods that dynamically adjust training difficulty. These advances are crucial for deploying DRL in real-world applications such as autonomous driving and robotics, where safety and reliability are paramount, and for improving the generalizability and trustworthiness of DRL models.
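To make the adversarial-training idea concrete, here is a minimal sketch of the inner step such methods share: finding a worst-case observation perturbation inside a small l-infinity ball, against which the agent is then trained. The linear value function `w`, the radius `eps`, and the gradient-free random search are illustrative assumptions, standing in for the learned networks and PGD-style attacks used in actual robust DRL pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear value function V(s) = w . s, a stand-in for a critic network.
w = np.array([0.5, -1.0, 2.0])

def value(s):
    return float(w @ s)

def worst_case_obs(s, eps, n_samples=256):
    """Approximate the worst-case observation within an l-inf ball of radius
    eps around s by random search -- a gradient-free stand-in for the
    projected-gradient attacks typically used in adversarial DRL training."""
    best_s, best_v = s, value(s)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=s.shape)
        v = value(s + delta)
        if v < best_v:  # the adversary minimizes the agent's value estimate
            best_s, best_v = s + delta, v
    return best_s

s = np.array([1.0, 0.0, 1.0])
s_adv = worst_case_obs(s, eps=0.1)
# The adversarial observation never increases the value and stays in the ball.
assert value(s_adv) <= value(s)
assert np.all(np.abs(s_adv - s) <= 0.1 + 1e-12)
```

During adversarial training, the agent would be updated on `s_adv` rather than `s`, so the learned policy remains performant under bounded observation noise.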