Learned Environment Model

Learned environment models are data-driven computational representations of an agent's surroundings, built to improve the efficiency and robustness of reinforcement learning and related tasks. Current research applies these models to preference elicitation in offline reinforcement learning, continual learning in dynamic environments, and more efficient gait synthesis in robotics. They are proving valuable for challenges such as partial observability, limited data, and the need for efficient exploration in complex scenarios, ultimately advancing the capabilities of autonomous agents across a range of applications.
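
To make the idea concrete, below is a minimal sketch of fitting a learned environment (dynamics) model from logged transitions, assuming a simple fully observed continuous-control setting; the dimensions, synthetic data, and class names are illustrative assumptions, not drawn from any specific paper listed here.

```python
# Minimal sketch: learn a dynamics model (next observation + reward) from
# offline transitions. All sizes and the synthetic dataset are hypothetical.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 8, 2  # assumed observation/action sizes

class DynamicsModel(nn.Module):
    """Predicts the next observation and reward from (observation, action)."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim + 1),  # next observation + scalar reward
        )

    def forward(self, obs, act):
        out = self.net(torch.cat([obs, act], dim=-1))
        return out[..., :-1], out[..., -1]  # (next_obs_pred, reward_pred)

# Synthetic batch of transitions standing in for an offline dataset.
obs = torch.randn(256, OBS_DIM)
act = torch.randn(256, ACT_DIM)
next_obs = obs + 0.1 * torch.randn(256, OBS_DIM)
reward = -obs.pow(2).sum(dim=-1)

model = DynamicsModel(OBS_DIM, ACT_DIM)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

# Supervised regression on logged transitions: the core of model learning.
for step in range(200):
    pred_obs, pred_rew = model(obs, act)
    loss = (nn.functional.mse_loss(pred_obs, next_obs)
            + nn.functional.mse_loss(pred_rew, reward))
    optim.zero_grad()
    loss.backward()
    optim.step()

# Once trained, the model can generate imagined rollouts for planning or
# policy improvement without further environment interaction.
```

This is only one common instantiation (a deterministic MLP dynamics model); the papers below explore richer variants, e.g. models handling partial observability or continual adaptation.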

Papers