Paper ID: 2408.09686

Algorithmic Contract Design with Reinforcement Learning Agents

David Molina Concha, Kyeonghyeon Park, Hyun-Rok Lee, Taesik Lee, Chi-Guhn Lee

We introduce a novel problem setting for algorithmic contract design, named the principal-MARL contract design problem. This setting extends traditional contract design to account for dynamic and stochastic environments using Markov Games and Multi-Agent Reinforcement Learning (MARL). To tackle this problem, we propose a Multi-Objective Bayesian Optimization (MOBO) framework named Constrained Pareto Maximum Entropy Search (cPMES). Our approach integrates MOBO and MARL to explore the highly constrained contract design space, identifying promising incentive and recruitment decisions. cPMES transforms the principal-MARL contract design problem into an unconstrained multi-objective problem, leveraging the probability of feasibility as one of the objectives and ensuring that promising designs predicted to lie on the feasibility border are included in the Pareto front. By focusing the entropy prediction on designs within the Pareto set, cPMES mitigates the risk of the search strategy being overwhelmed by entropy from the constraints. We demonstrate the effectiveness of cPMES through extensive benchmark studies in synthetic and simulated environments, showing its ability to find feasible contract designs that maximize the principal's objectives. Additionally, we provide theoretical support with a sub-linear regret bound with respect to the number of iterations.
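To make the feasibility-as-objective idea concrete, the sketch below (not the authors' implementation; the surrogate predictions, the feasibility threshold, and the `pareto_mask` helper are all hypothetical) shows how a Gaussian surrogate's probability of feasibility can be stacked alongside the predicted principal objective before non-dominated sorting, so that high-value designs near the feasibility border remain candidates in the Pareto set:

```python
# Minimal sketch, assuming Gaussian surrogate models for the objective and
# a single constraint c(x) <= threshold; placeholder data stands in for
# real surrogate predictions over candidate contract designs.
import numpy as np
from scipy.stats import norm

def probability_of_feasibility(mu_c, sigma_c, threshold=0.0):
    """PoF under a Gaussian surrogate: Phi((threshold - mu_c) / sigma_c),
    evaluated per candidate design."""
    return norm.cdf((threshold - mu_c) / np.maximum(sigma_c, 1e-12))

def pareto_mask(objectives):
    """Boolean mask of non-dominated rows, assuming all objectives are
    maximized. O(n^2), which is fine for a small candidate pool."""
    n = objectives.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Row i is dominated if some row is >= in every objective
        # and strictly > in at least one.
        dominated_by = (np.all(objectives >= objectives[i], axis=1)
                        & np.any(objectives > objectives[i], axis=1))
        if np.any(dominated_by):
            mask[i] = False
    return mask

# Hypothetical surrogate predictions for 100 candidate contract designs:
rng = np.random.default_rng(0)
mu_f = rng.normal(size=100)            # predicted principal objective
mu_c = rng.normal(size=100)            # predicted constraint value
sigma_c = rng.uniform(0.1, 1.0, 100)   # constraint predictive std

pof = probability_of_feasibility(mu_c, sigma_c)
# Augment the objective with PoF: near-border designs with high predicted
# value can still enter the Pareto set instead of being discarded outright.
augmented = np.column_stack([mu_f, pof])
pareto_set = np.where(pareto_mask(augmented))[0]
print(f"{len(pareto_set)} candidates in the augmented Pareto set")
```

Restricting a subsequent entropy-based acquisition to this augmented Pareto set, rather than to the full constrained design space, is the mechanism the abstract describes for keeping the search from being dominated by constraint entropy.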

Submitted: Aug 19, 2024