Model-Based Reinforcement Learning
Model-based reinforcement learning (MBRL) aims to improve the sample efficiency and robustness of reinforcement learning agents by learning a model of the environment's dynamics. Current research focuses on enhancing model accuracy and robustness through techniques like incorporating expert knowledge, using bisimulation metrics for state representation, and employing adversarial training to handle uncertainties. These advancements are leading to more efficient and reliable control policies in various applications, including robotics, autonomous driving, and even protein design, by reducing the need for extensive real-world interactions during training.
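The core MBRL loop described above — collect experience, fit a dynamics model, then plan against the model instead of the real environment — can be sketched in a minimal tabular form. The chain environment, count-based model, and hyperparameters below are illustrative assumptions, not taken from any of the papers listed on this page:

```python
import random

# Illustrative toy environment: a 5-state chain where action 1 moves
# right and action 0 moves left; entering the last state gives reward 1.
N_STATES, ACTIONS, GAMMA = 5, (0, 1), 0.9

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

# 1. Collect real-environment experience with a random policy.
random.seed(0)
counts = {}   # (s, a) -> {s2: visit count}
rewards = {}  # (s, a, s2) -> observed reward (deterministic here)
s = 0
for _ in range(2000):
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    counts.setdefault((s, a), {}).setdefault(s2, 0)
    counts[(s, a)][s2] += 1
    rewards[(s, a, s2)] = r
    s = s2

# 2. Fit a tabular dynamics model: estimate P(s'|s,a) from counts.
def model(s, a):
    c = counts[(s, a)]
    total = sum(c.values())
    return [(s2, n / total, rewards[(s, a, s2)]) for s2, n in c.items()]

# 3. Plan entirely inside the learned model via value iteration,
#    with no further real-environment interaction.
V = [0.0] * N_STATES
for _ in range(100):
    V = [max(sum(p * (r + GAMMA * V[s2]) for s2, p, r in model(s, a))
             for a in ACTIONS)
         for s in range(N_STATES)]

policy = [max(ACTIONS, key=lambda a: sum(p * (r + GAMMA * V[s2])
              for s2, p, r in model(s, a)))
          for s in range(N_STATES)]
print(policy)  # greedy policy under the learned model
```

Because planning happens in the learned model, the agent needs only enough real interaction to estimate the dynamics — this is the sample-efficiency argument behind MBRL, though in deep-RL settings the tabular model is replaced by a learned neural world model.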
Papers
A New View on Planning in Online Reinforcement Learning
Kevin Roice, Parham Mohammad Panahi, Scott M. Jordan, Adam White, Martha White
Adaptive Layer Splitting for Wireless LLM Inference in Edge Computing: A Model-Based Reinforcement Learning Approach
Yuxuan Chen, Rongpeng Li, Xiaoxue Yu, Zhifeng Zhao, Honggang Zhang
Neuromorphic dreaming: A pathway to efficient learning in artificial agents
Ingo Blakowski, Dmitrii Zendrikov, Cristiano Capone, Giacomo Indiveri
Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search
Nicola Dainese, Matteo Merler, Minttu Alakuijala, Pekka Marttinen
iVideoGPT: Interactive VideoGPTs are Scalable World Models
Jialong Wu, Shaofeng Yin, Ningya Feng, Xu He, Dong Li, Jianye Hao, Mingsheng Long