Visual Model-Based

Visual model-based reinforcement learning (RL) aims to enable robots to learn complex tasks directly from visual observations by building a predictive model of the environment, improving sample efficiency and generalization over model-free methods. Current research focuses on closing the sim-to-real gap, improving robustness to spurious visual variations, and making exploration more efficient through techniques such as demonstration-augmented learning, latent state representation learning, and information prioritization. These advances are crucial for deploying RL agents in real-world settings, particularly robotic manipulation tasks, where efficient learning from visual input is essential.
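
To make the core idea concrete, below is a minimal sketch (not any specific paper's method) of latent state representation learning for visual model-based RL: an encoder compresses image observations into a compact latent state, and a dynamics model learns to predict the next latent state and reward, so planning or imagined rollouts can happen in latent space rather than pixel space. All module names, network sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal latent world-model sketch for visual model-based RL (illustrative only).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB observation to a compact latent state vector."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),    # 64 -> 31
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),   # 31 -> 14
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),  # 14 -> 6
            nn.Flatten(),
        )
        self.fc = nn.Linear(128 * 6 * 6, latent_dim)

    def forward(self, obs):
        return self.fc(self.conv(obs))

class LatentDynamics(nn.Module):
    """Predicts the next latent state and reward from (latent, action)."""
    def __init__(self, latent_dim=32, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim + 1),  # next latent + scalar reward
        )

    def forward(self, z, a):
        out = self.net(torch.cat([z, a], dim=-1))
        return out[..., :-1], out[..., -1]  # (next_z, reward)

encoder = Encoder()
dynamics = LatentDynamics()
optim = torch.optim.Adam(
    list(encoder.parameters()) + list(dynamics.parameters()), lr=3e-4
)

def train_step(obs, action, reward, next_obs):
    """One gradient step on a batch of (o_t, a_t, r_t, o_{t+1}) transitions."""
    z = encoder(obs)
    with torch.no_grad():
        z_next_target = encoder(next_obs)  # stop-gradient target latent
    z_next_pred, r_pred = dynamics(z, action)
    loss = ((z_next_pred - z_next_target) ** 2).mean() + ((r_pred - reward) ** 2).mean()
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

# Dummy batch: 8 transitions of 64x64 RGB observations and 4-dim actions.
obs = torch.rand(8, 3, 64, 64)
next_obs = torch.rand(8, 3, 64, 64)
action = torch.rand(8, 4)
reward = torch.rand(8)
print(train_step(obs, action, reward, next_obs))
```

In practice, published methods replace the simple latent-prediction loss above with richer objectives (e.g. pixel reconstruction, contrastive, or variational losses) and use the learned model for planning or policy learning in imagination; this sketch only illustrates the shared structure of encoding observations and learning dynamics in latent space.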

Papers