Deep Reinforcement Learning
Deep reinforcement learning (DRL) trains agents to make near-optimal decisions in complex environments by learning from trial and error. Current research focuses on improving DRL's robustness, sample efficiency, and interpretability, often employing algorithms and architectures such as Proximal Policy Optimization (PPO), deep Q-networks (DQNs), and graph neural networks (GNNs) to address challenges in diverse applications such as robotics, game playing, and resource management. These advances enable the development of more adaptable and efficient autonomous systems across numerous domains.
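To make the trial-and-error learning loop concrete, here is a minimal tabular Q-learning sketch on a toy one-dimensional chain environment. It is a simplification, not any paper's method: DQNs replace the table below with a neural network that approximates the same action-value function, but the temporal-difference update is the same idea. The environment, hyperparameters, and episode count are illustrative assumptions.

```python
import random

# Toy 1D chain: states 0..4, actions 0 = left, 1 = right.
# Reward 1 for reaching the terminal goal state, 0 otherwise.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative hyperparameters

def step(state, action):
    """Environment dynamics: move left or right along the chain."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Action-value table Q[state][action]; a DQN would approximate this with a network.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

random.seed(0)
for _ in range(500):  # episodes of trial and error
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Temporal-difference update toward the bootstrapped one-step target.
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

# Greedy policy per non-terminal state (1 = move right, toward the goal).
greedy = [max(range(N_ACTIONS), key=lambda act: Q[s][act]) for s in range(GOAL)]
print(greedy)
```

After enough episodes the greedy policy moves right in every state, since the discounted value of reaching the goal propagates backward through the table one update at a time.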
Papers
Neural DNF-MT: A Neuro-symbolic Approach for Learning Interpretable and Editable Policies
Kexin Gu Baugh, Luke Dickens, Alessandra Russo
Rethinking Adversarial Attacks in Reinforcement Learning from Policy Distribution Perspective
Tianyang Duan, Zongyuan Zhang, Zheng Lin, Yue Gao, Ling Xiong, Yong Cui, Hongbin Liang, Xianhao Chen, Heming Cui, Dong Huang
Co-Activation Graph Analysis of Safety-Verified and Explainable Deep Reinforcement Learning Policies
Dennis Gross, Helge Spieker
Sim-to-Real Transfer for Mobile Robots with Reinforcement Learning: from NVIDIA Isaac Sim to Gazebo and Real ROS 2 Robots
Sahar Salimpour, Jorge Peña-Queralta, Diego Paez-Granados, Jukka Heikkonen, Tomi Westerlund
Image Classification with Deep Reinforcement Active Learning
Mingyuan Jiu, Xuguang Song, Hichem Sahbi, Shupan Li, Yan Chen, Wei Guo, Lihua Guo, Mingliang Xu
Numerical solutions of fixed points in two-dimensional Kuramoto-Sivashinsky equation expedited by reinforcement learning
Juncheng Jiang, Dongdong Wan, Mengqi Zhang