Paper ID: 2410.11221
Multi-objective Reinforcement Learning: A Tool for Pluralistic Alignment
Peter Vamplew, Conor F Hayes, Cameron Foale, Richard Dazeley, Hadassah Harland
Reinforcement learning (RL) is a valuable tool for the creation of AI systems. However, it may be problematic to adequately align RL systems based on scalar rewards when there are multiple conflicting values or stakeholders to be considered. Over the last decade, multi-objective reinforcement learning (MORL) using vector rewards has emerged as an alternative to standard, scalar RL. This paper provides an overview of the role which MORL can play in creating pluralistically-aligned AI.
Submitted: Oct 15, 2024
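The abstract's core distinction (scalar vs. vector rewards) can be sketched in a few lines. This is a minimal illustration, not code from the paper: the reward values, objective names, and the choice of linear scalarisation with stakeholder weights are all illustrative assumptions, though linear scalarisation is one standard utility function used in the MORL literature.

```python
# Minimal sketch of the scalar-vs-vector reward distinction (illustrative
# values only; objectives and weights are hypothetical, not from the paper).

# A MORL agent receives a vector reward with one component per objective,
# e.g. (task performance, safety cost).
vector_reward = (1.0, -0.5)

# Standard scalar RL would collapse the objectives into a single number
# before learning. MORL keeps the vector, so different stakeholders can
# apply their own utility function afterwards. Linear scalarisation with
# stakeholder-specific weights is one common choice:
stakeholder_a = (0.7, 0.3)  # prioritises task performance
stakeholder_b = (0.2, 0.8)  # prioritises safety

def linear_utility(weights, reward):
    """Weighted sum of reward components: one simple MORL utility function."""
    return sum(w * r for w, r in zip(weights, reward))

utility_a = linear_utility(stakeholder_a, vector_reward)
utility_b = linear_utility(stakeholder_b, vector_reward)
```

The same vector reward yields different utilities for the two stakeholders, which is the pluralistic-alignment point the abstract gestures at: preferences can be applied per stakeholder rather than baked into a single scalar reward.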