Paper ID: 2402.08062

Avoiding Catastrophe in Continuous Spaces by Asking for Help

Benjamin Plaut, Hanlin Zhu, Stuart Russell

Most reinforcement learning algorithms with formal regret guarantees assume all mistakes are reversible and essentially rely on trying all possible behaviors. This approach leads to poor outcomes when some mistakes are irreparable or even catastrophic. We propose a variant of the contextual bandit problem where the goal is to minimize the chance of catastrophe. Specifically, we assume that the payoff each round represents the chance of avoiding catastrophe that round, and try to maximize the product of payoffs (the overall chance of avoiding catastrophe). We allow a limited number of queries to a mentor and assume a Lipschitz continuous payoff function. We first show that in general, any algorithm either constantly queries the mentor or is nearly guaranteed to cause catastrophe. However, when the mentor policy class has bounded Natarajan dimension and contains at least some "reasonable" policies, we provide an algorithm whose regret and rate of querying the mentor both approach 0 as the time horizon grows. We also present an alternative algorithm which provides the same regret and query guarantees when the mentor's action changes a constant number of times in a 1D state space, and can handle adversarially chosen states.
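As a rough illustration of the objective described in the abstract, under assumed notation (the paper's own symbols and exact regret definition may differ): writing \mu_t(x_t, a_t) \in [0,1] for the chance of avoiding catastrophe in round t given state x_t and action a_t, and \pi^m for the mentor policy,

% Assumed notation for illustration only; not taken verbatim from the paper.
\[
  \text{Objective:}\quad \max_{\pi}\; \prod_{t=1}^{T} \mu_t\bigl(x_t, \pi(x_t)\bigr)
  \qquad \text{(overall chance of avoiding catastrophe)}
\]
\[
  \text{Regret:}\quad R_T \;=\; \prod_{t=1}^{T} \mu_t\bigl(x_t, \pi^m(x_t)\bigr)
  \;-\; \prod_{t=1}^{T} \mu_t\bigl(x_t, a_t\bigr),
\]
where a_t is the action actually taken (the agent's own choice, or the mentor's action when queried). The main positive result states that both this regret and the rate of querying the mentor approach 0 as the horizon T grows.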

Submitted: Feb 12, 2024