Paper ID: 2407.11751

Why long model-based rollouts are no reason for bad Q-value estimates

Philipp Wissmann, Daniel Hein, Steffen Udluft, Volker Tresp

This paper examines model-based offline reinforcement learning with long model rollouts. While parts of the literature criticize this approach because model errors can compound over the rollout horizon, practitioners have nevertheless applied it successfully in real-world settings. The paper aims to demonstrate that long rollouts do not necessarily produce exponentially growing errors and can in fact yield better Q-value estimates than model-free methods. These findings support the practical use of long model-based rollouts in reinforcement learning.
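To make the setting concrete, here is a minimal sketch of the kind of Q-value estimation the abstract refers to: Monte Carlo returns obtained by rolling a policy out in a learned dynamics model for many steps. The interfaces `model.step(s, a)` and `policy(s)` are hypothetical placeholders, not the paper's actual implementation.

```python
import numpy as np

def rollout_q_estimate(model, policy, s0, a0,
                       horizon=200, gamma=0.99, n_rollouts=10):
    """Monte Carlo Q(s0, a0) estimate from long rollouts in a learned model.

    Assumed (hypothetical) interfaces:
      model.step(s, a) -> (next_state, reward)
      policy(s)        -> action
    """
    returns = []
    for _ in range(n_rollouts):
        s, a = s0, a0
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            # Step the learned model, accumulate the discounted reward.
            s, r = model.step(s, a)
            total += discount * r
            discount *= gamma
            a = policy(s)
        returns.append(total)
    # Average over rollouts to reduce variance from stochastic
    # model transitions or a stochastic policy.
    return float(np.mean(returns))
```

Whether estimates of this form degrade with the rollout horizon, as the compounding-error critique suggests, is exactly the question the paper investigates.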

Submitted: Jul 16, 2024