First-Order Bayesian Optimization
First-order Bayesian optimization (FOBO) efficiently finds the maximum of expensive-to-evaluate functions by leveraging both function evaluations and gradient information. Current research focuses on improving scalability to high-dimensional problems, typically with Gaussian processes (GPs) as surrogate models, while also exploring alternatives such as Vecchia approximations and randomized neural networks to address the computational limitations of exact GP inference. These advances enable FOBO's application in diverse fields, including hyperparameter tuning in machine learning and policy optimization in reinforcement learning, where efficient exploration of complex search spaces is crucial.
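To make the core idea concrete, below is a minimal, illustrative 1-D sketch (not taken from any of the listed papers): a GP surrogate with an RBF kernel jointly models function values and gradients via the kernel's derivative covariances, and expected improvement over a candidate grid selects the next evaluation. The function names (`rbf_blocks`, `gp_posterior`, `fobo`), the fixed hyperparameters, and the toy objective are all assumptions made for the example.

```python
# Minimal 1-D FOBO sketch: GP surrogate over (values, gradients) + expected improvement.
# Hyperparameters are fixed for illustration; real implementations would learn them.
import numpy as np
from scipy.stats import norm


def rbf_blocks(x1, x2, ls=0.5, var=1.0):
    """Covariance blocks between values and gradients of an RBF-kernel GP."""
    d = x1[:, None] - x2[None, :]
    k_ff = var * np.exp(-0.5 * d**2 / ls**2)
    k_fg = k_ff * d / ls**2                      # cov(f(x1), f'(x2))
    k_gg = k_ff * (1.0 / ls**2 - d**2 / ls**4)   # cov(f'(x1), f'(x2))
    return k_ff, k_fg, k_gg


def gp_posterior(x_train, y, g, x_test, noise=1e-5):
    """Posterior mean/std of f at x_test, conditioned on values y and gradients g."""
    k_ff, k_fg, k_gg = rbf_blocks(x_train, x_train)
    # Joint covariance of the stacked observations [y; g] -- the key FOBO ingredient.
    K = np.block([[k_ff, k_fg], [k_fg.T, k_gg]]) + noise * np.eye(2 * len(x_train))
    k_sf, k_sg, _ = rbf_blocks(x_test, x_train)
    K_s = np.hstack([k_sf, k_sg])                # cross-covariance with [y; g]
    alpha = np.linalg.solve(K, np.concatenate([y, g]))
    mean = K_s @ alpha
    var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)  # prior variance = 1.0
    return mean, np.sqrt(np.clip(var, 1e-12, None))


def expected_improvement(mean, std, best):
    """Expected improvement over the incumbent, for maximization."""
    z = (mean - best) / std
    return (mean - best) * norm.cdf(z) + std * norm.pdf(z)


def fobo(objective, gradient, bounds=(0.0, 2.0), n_init=3, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, size=n_init)
    y, g = objective(x), gradient(x)
    grid = np.linspace(*bounds, 200)
    for _ in range(n_iter):
        mean, std = gp_posterior(x, y, g, grid)
        x_next = grid[[np.argmax(expected_improvement(mean, std, y.max()))]]
        x = np.concatenate([x, x_next])
        y = np.concatenate([y, objective(x_next)])
        g = np.concatenate([g, gradient(x_next)])
    return x[np.argmax(y)], y.max()


if __name__ == "__main__":
    f = lambda x: np.sin(3 * x) + x            # toy objective to maximize
    df = lambda x: 3 * np.cos(3 * x) + 1.0     # its analytic gradient
    x_best, f_best = fobo(f, df)
    print(f"best x = {x_best:.3f}, best f(x) = {f_best:.3f}")
```

The only difference from a value-only BO loop is the block covariance matrix over stacked value and gradient observations: the derivative observations shrink the posterior uncertainty around each sample, so the acquisition function needs fewer expensive evaluations. Scalable variants replace this exact GP with approximations of the kind mentioned above.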
Papers
July 5, 2024
June 20, 2023
June 19, 2023
February 14, 2023
June 16, 2022