Gaussian Process
Gaussian processes (GPs) are probabilistic models for function approximation and uncertainty quantification, providing a principled framework for regression, optimization, and related tasks. Current research extends their capabilities through novel architectures such as deep GPs and hybrid models that combine GPs with neural networks or other machine learning techniques, while tackling scalability and computational efficiency, particularly in high-dimensional or time-varying settings. These advances are shaping fields such as robotics, control systems, and scientific modeling by delivering robust, uncertainty-aware predictions that support more reliable decision-making in complex systems. Efficient algorithms and accompanying theoretical analyses further strengthen the practical applicability and trustworthiness of GP-based methods.
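To make the notion of "uncertainty-aware prediction" concrete, the following is a minimal sketch of GP regression using scikit-learn's GaussianProcessRegressor. The toy sine-wave data, kernel choice, and hyperparameters are illustrative assumptions and do not correspond to any specific paper listed below.

```python
# Minimal GP regression sketch (illustrative; assumes scikit-learn and NumPy).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Noisy observations of an unknown function (toy example, not from the papers).
X_train = rng.uniform(0.0, 10.0, size=(20, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(20)

# RBF kernel encodes smoothness; WhiteKernel models observation noise.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Predictions come with per-point standard deviations, i.e. the
# predictive uncertainty that downstream decision-making can use.
X_test = np.linspace(0.0, 10.0, 100).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(f"max predictive std: {std.max():.3f}")
```

In practice the predictive standard deviation is what drives applications such as Bayesian optimization acquisition functions or uncertainty-aware model predictive control, as in several of the papers below.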
Papers
Scaling Gaussian Processes for Learning Curve Prediction via Latent Kronecker Structure
Jihao Andreas Lin, Sebastian Ament, Maximilian Balandat, Eytan Bakshy
Batched Energy-Entropy acquisition for Bayesian Optimization
Felix Teufel, Carsten Stahlhut, Jesper Ferkinghoff-Borg
Calibrated Computation-Aware Gaussian Processes
Disha Hegde, Mohamed Adil, Jon Cockayne
Learning to Race in Extreme Turning Scene with Active Exploration and Gaussian Process Regression-based MPC
Guoqiang Wu, Cheng Hu, Wangjia Weng, Zhouheng Li, Yonghao Fu, Lei Xie, Hongye Su
Robust Transfer Learning for Active Level Set Estimation with Locally Adaptive Gaussian Process Prior
Giang Ngo, Dang Nguyen, Sunil Gupta