Paper ID: 2407.16025
Exploring and Addressing Reward Confusion in Offline Preference Learning
Xin Chen, Sam Toyer, Florian Shkurti
Spurious correlations in a reward model's training data can prevent Reinforcement Learning from Human Feedback (RLHF) from identifying the desired goal and can induce unwanted behaviors. This paper shows that offline RLHF is susceptible to reward confusion, especially in the presence of spurious correlations in offline data. We create a benchmark to study this problem and propose a method that significantly reduces reward confusion by leveraging the transitivity of preferences to build a global preference chain with active learning.
Submitted: Jul 22, 2024
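
The abstract only names the key idea (exploiting transitivity while actively building a global preference chain), so the following is a minimal, hypothetical sketch of one way such a chain could be constructed, not the paper's actual algorithm. The names `build_preference_chain` and `prefer` are illustrative assumptions: each new trajectory segment is placed into an ordered chain by binary search, so only O(log n) oracle queries are needed per item and all remaining pairwise preferences are inferred by transitivity from positions in the chain.

```python
from typing import Callable, List, TypeVar

T = TypeVar("T")

def build_preference_chain(
    items: List[T],
    prefer: Callable[[T, T], bool],
) -> List[T]:
    """Order `items` into a single preference chain (least to most preferred).

    `prefer(a, b)` stands in for an active preference query (e.g., a human
    labeler) that returns True if `a` is preferred to `b`. Each item is
    inserted by binary search, so per-item query cost is logarithmic; all
    other pairwise preferences then follow by transitivity.
    """
    chain: List[T] = []
    for item in items:
        lo, hi = 0, len(chain)
        while lo < hi:
            mid = (lo + hi) // 2
            if prefer(item, chain[mid]):  # item preferred over chain[mid]
                lo = mid + 1              # so it belongs further up the chain
            else:
                hi = mid
        chain.insert(lo, item)
    return chain

# Toy usage: a numeric "ground-truth reward" stands in for the human oracle.
segments = [3, 1, 4, 1, 5, 9, 2, 6]
chain = build_preference_chain(segments, lambda a, b: a > b)
# chain == [1, 1, 2, 3, 4, 5, 6, 9]; the preference between any pair can now
# be read off from chain positions without issuing further oracle queries.
```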