Black Box Function
Black-box function optimization aims to efficiently find optimal inputs to functions whose internal workings are unknown, relying on iterative evaluations to guide the search. Current research emphasizes robust, sample-efficient algorithms that employ diverse surrogate models, including Gaussian processes, neural networks (physics-informed and interpretable architectures among them), and generative approaches such as diffusion models, within Bayesian optimization frameworks. These advances are crucial for computationally expensive problems across engineering design, hyperparameter tuning, and scientific experimentation, where direct analytical optimization is infeasible. The overarching goal is to minimize the number of expensive function evaluations needed to locate optimal or near-optimal solutions.
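The sketch below illustrates the basic loop these methods share: fit a surrogate to the evaluations gathered so far, score candidate inputs with an acquisition function, and spend the next expensive evaluation on the most promising point. It is a minimal illustration only, assuming a Gaussian-process surrogate, an expected-improvement acquisition, and a made-up 1-D objective standing in for the unknown black box; it does not reproduce the method of any paper listed here.

```python
# Minimal Bayesian optimization sketch (illustrative assumptions throughout):
# the 1-D objective, kernel choice, candidate budget, and xi are all arbitrary.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def black_box(x):
    # Stand-in for an expensive simulation or experiment (unknown to the optimizer).
    return np.sin(3.0 * x) + 0.5 * x

def expected_improvement(candidates, gp, best_y, xi=0.01):
    # EI for minimization: expected improvement over the incumbent best value.
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    imp = best_y - mu - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
bounds = (-2.0, 2.0)

# Seed the surrogate with a few random evaluations of the expensive function.
X = rng.uniform(*bounds, size=(4, 1))
y = black_box(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):                                      # evaluation budget
    gp.fit(X, y)                                         # refit surrogate to all data
    candidates = rng.uniform(*bounds, size=(500, 1))     # cheap candidate pool
    ei = expected_improvement(candidates, gp, y.min())
    x_next = candidates[np.argmax(ei)].reshape(1, -1)    # most promising input
    y_next = black_box(x_next).ravel()                   # one expensive evaluation
    X, y = np.vstack([X, x_next]), np.append(y, y_next)

print("best x:", X[np.argmin(y)].item(), "best y:", y.min())
```

The papers below vary exactly these ingredients: the surrogate (neural networks, dictionary-based embeddings, and other models in place of the plain Gaussian process), the acquisition function (robust or high-dimensional variants of expected improvement), and the search over candidates.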
Papers
Interpretable Architecture Neural Networks for Function Visualization
Shengtong Zhang, Daniel W. Apley
Bayesian Optimization over High-Dimensional Combinatorial Spaces via Dictionary-based Embeddings
Aryan Deshwal, Sebastian Ament, Maximilian Balandat, Eytan Bakshy, Janardhan Rao Doppa, David Eriksson
Neural-BO: A Black-box Optimization Algorithm using Deep Neural Networks
Dat Phan-Trong, Hung Tran-The, Sunil Gupta
Robust expected improvement for Bayesian optimization
Ryan B. Christianson, Robert B. Gramacy
Trieste: Efficiently Exploring The Depths of Black-box Functions with TensorFlow
Victor Picheny, Joel Berkeley, Henry B. Moss, Hrvoje Stojic, Uri Granta, Sebastian W. Ober, Artem Artemev, Khurram Ghani, Alexander Goodall, Andrei Paleyes, Sattar Vakili, Sergio Pascual-Diaz, Stratis Markou, Jixiang Qing, Nasrulloh R. B. S Loka, Ivo Couckuyt
Unleashing the Potential of Acquisition Functions in High-Dimensional Bayesian Optimization
Jiayu Zhao, Renyu Yang, Shenghao Qiu, Zheng Wang