Model-Based Reinforcement Learning
Model-based reinforcement learning (MBRL) aims to improve the sample efficiency and robustness of reinforcement learning agents by learning a model of the environment's dynamics. Current research focuses on improving model accuracy and reliability through techniques such as incorporating expert knowledge, using bisimulation metrics for state representation, and employing adversarial training to handle uncertainty. By reducing the need for extensive real-world interaction during training, these advances yield more efficient and reliable control policies in applications such as robotics, autonomous driving, and even protein design.
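To make the core MBRL loop concrete, below is a minimal sketch (not taken from any of the papers listed here): an agent collects a handful of real transitions from a toy 1-D environment with unknown linear dynamics, fits a dynamics model by least squares, and then plans actions against the learned model via random-shooting model-predictive control, so almost all rollouts happen in imagination rather than in the real environment. All names (toy_env, fit via lstsq, plan) are illustrative assumptions, not an implementation of any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
A_TRUE, B_TRUE = 0.9, 0.5   # hidden environment parameters

def toy_env(s, u):
    """One real-environment step (the dynamics the agent must learn)."""
    return A_TRUE * s + B_TRUE * u + rng.normal(scale=0.01)

# 1) Collect a small batch of real transitions with random actions.
states, actions, next_states = [], [], []
s = 1.0
for _ in range(30):
    u = rng.uniform(-1, 1)
    s_next = toy_env(s, u)
    states.append(s); actions.append(u); next_states.append(s_next)
    s = s_next

# 2) Fit a dynamics model s' ~ a*s + b*u by least squares.
X = np.column_stack([states, actions])
a_hat, b_hat = np.linalg.lstsq(X, np.array(next_states), rcond=None)[0]

def model(s, u):
    """Learned dynamics model used for planning instead of the real env."""
    return a_hat * s + b_hat * u

# 3) Plan with the model: random-shooting MPC toward the goal s = 0.
def plan(s, horizon=5, n_candidates=256):
    best_u, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, size=horizon)
        sim, cost = s, 0.0
        for u in seq:                 # imagined rollout, no real env steps
            sim = model(sim, u)
            cost += sim ** 2
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

# 4) Execute in the real environment, replanning every step.
s = 1.0
for t in range(10):
    s = toy_env(s, plan(s))
print(f"final state after 10 steps: {s:.3f}")  # driven near 0
```

The design choice this illustrates is the sample-efficiency argument from the paragraph above: only 30 real transitions are used to fit the model, while the thousands of candidate rollouts evaluated during planning are simulated entirely inside the learned model.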
Papers
DODT: Enhanced Online Decision Transformer Learning through Dreamer's Actor-Critic Trajectory Forecasting
Eric Hanchen Jiang, Zhi Zhang, Dinghuai Zhang, Andrew Lizarraga, Chenheng Xu, Yasi Zhang, Siyan Zhao, Zhengjie Xu, Peiyu Yu, Yuer Tang, Deqian Kong, Ying Nian Wu
Bayes Adaptive Monte Carlo Tree Search for Offline Model-based Reinforcement Learning
Jiayu Chen, Wentse Chen, Jeff Schneider
Learning to Walk from Three Minutes of Real-World Data with Semi-structured Dynamics Models
Jacob Levy, Tyler Westenbroek, David Fridovich-Keil
Drama: Mamba-Enabled Model-Based Reinforcement Learning Is Sample and Parameter Efficient
Wenlong Wang, Ivana Dusparic, Yucheng Shi, Ke Zhang, Vinny Cahill
SOLD: Reinforcement Learning with Slot Object-Centric Latent Dynamics
Malte Mosbach, Jan Niklas Ewertz, Angel Villar-Corrales, Sven Behnke