Paper ID: 2409.13108
Disentangling Recognition and Decision Regrets in Image-Based Reinforcement Learning
Alihan Hüyük, Arndt Ryo Koblitz, Atefeh Mohajeri, Matthew Andrews
In image-based reinforcement learning (RL), policies usually operate in two steps: first extracting lower-dimensional features from raw images (the "recognition" step), and then taking actions based on the extracted features (the "decision" step). Extracting features that are spuriously correlated with performance or irrelevant for decision-making can lead to poor generalization, a phenomenon known as observational overfitting in image-based RL. In such cases, it can be hard to quantify how much of the error is attributable to poor feature extraction versus poor decision-making. To disentangle these two sources of error, we introduce the notions of recognition regret and decision regret. Using these notions, we characterize and disambiguate the two distinct causes of observational overfitting: over-specific representations, which include features that are not needed for optimal decision-making (leading to high decision regret), versus under-specific representations, which include only a limited set of features that were spuriously correlated with performance during training (leading to high recognition regret). Finally, we provide illustrative examples of observational overfitting due to both over-specific and under-specific representations in maze environments as well as the Atari game Pong.
Submitted: Sep 19, 2024
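
The abstract does not spell out how the two regrets are defined; the decomposition below is a minimal sketch of one plausible formalization consistent with the description above, not the paper's verbatim definitions. Assume the learned policy factors as $\pi = g \circ \phi$, with encoder $\phi$ (recognition) and decision head $g$; let $V^{\pi}$ denote its value, $V^{*}$ the optimal value, and $V^{*}_{\phi}$ the best value achievable by any policy that acts only on the extracted features $\phi(o)$.

% Hedged sketch: an assumed formalization, not quoted from the paper.
% Total suboptimality splits into a term charged to the representation
% and a term charged to the decision rule that acts on it.
\[
  \underbrace{V^{*} - V^{\pi}}_{\text{total regret}}
  \;=\;
  \underbrace{\bigl(V^{*} - V^{*}_{\phi}\bigr)}_{\text{recognition regret}}
  \;+\;
  \underbrace{\bigl(V^{*}_{\phi} - V^{\pi}\bigr)}_{\text{decision regret}}
\]

Under this reading, an under-specific $\phi$ discards features needed for optimal control, so even the best decision head cannot recover performance (high recognition regret), while an over-specific $\phi$ retains spurious features that the decision head can latch onto during training (high decision regret).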