Paper ID: 2302.01310
Knowledge Gradient for Multi-Objective Bayesian Optimization with Decoupled Evaluations
Jack M. Buckingham, Sebastian Rojas Gonzalez, Juergen Branke
Multi-objective Bayesian optimization aims to find the Pareto front of trade-offs between a set of expensive objectives while collecting as few samples as possible. In some cases, it is possible to evaluate the objectives separately, and a different latency or evaluation cost can be associated with each objective. This decoupling of the objectives presents an opportunity to learn the Pareto front faster by avoiding unnecessary, expensive evaluations. We propose a scalarization-based knowledge gradient acquisition function that accounts for the different evaluation costs of the objectives. We prove asymptotic consistency of the estimator of the optimum for an arbitrary, D-dimensional, real compact search space and show empirically that the algorithm performs comparably to the state of the art and significantly outperforms versions which always evaluate both objectives.
Submitted: Feb 2, 2023
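To illustrate the decoupled, cost-aware idea described in the abstract, the sketch below selects a (candidate point, single objective) pair by dividing a Monte-Carlo value-of-information estimate by that objective's evaluation cost. It is a minimal, hypothetical illustration, not the paper's method: the toy posterior, the fixed linear scalarization, and the helper names (`posterior_mean_and_std`, `scalarized_kg`) are assumptions introduced here for clarity.

```python
# Hypothetical sketch of cost-aware, decoupled acquisition (not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

def posterior_mean_and_std(x, obj):
    """Toy stand-in for a per-objective GP posterior (assumption, not the paper's model)."""
    mean = np.sin(3 * x) if obj == 0 else np.cos(2 * x)
    std = 0.2 + 0.1 * np.abs(np.sin(5 * x))
    return mean, std

def scalarized_kg(x, obj, weights, n_fantasies=64):
    """Monte-Carlo estimate of the one-step value of evaluating objective `obj` at `x`
    under a fixed linear scalarization -- a simplification of the paper's acquisition."""
    mu, sigma = posterior_mean_and_std(x, obj)
    fantasies = mu + sigma * rng.standard_normal(n_fantasies)
    # Value of information: expected improvement of the scalarized posterior mean.
    current_best = weights[obj] * mu
    return np.mean(np.maximum(weights[obj] * fantasies - current_best, 0.0))

# Per-objective evaluation costs (decoupled setting): objective 1 is 5x as expensive.
costs = np.array([1.0, 5.0])
weights = np.array([0.5, 0.5])
candidates = np.linspace(0.0, 1.0, 101)

# Choose the (point, objective) pair with the highest value of information per unit cost,
# so cheap objectives are queried more often unless the expensive one is more informative.
best = max(
    ((x, obj, scalarized_kg(x, obj, weights) / costs[obj])
     for x in candidates for obj in (0, 1)),
    key=lambda t: t[2],
)
print(f"evaluate objective {best[1]} at x={best[0]:.2f} (score {best[2]:.4f})")
```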