Gaussian Process
Gaussian processes (GPs) are nonparametric probabilistic models for function approximation that return calibrated uncertainty estimates alongside their predictions. Current research extends their capabilities through novel architectures such as deep GPs and hybrid models that combine GPs with neural networks or other machine learning techniques, and addresses scalability and computational-efficiency challenges, particularly in high-dimensional or time-varying settings. These advances are influencing fields such as robotics, control systems, and scientific modeling by supplying robust, uncertainty-aware predictions that support more reliable decision-making in complex systems. Efficient algorithms and accompanying theoretical analyses further strengthen the practical applicability and trustworthiness of GP-based methods.
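As a concrete illustration of the uncertainty-aware predictions mentioned above, the following minimal sketch implements exact GP regression with a squared-exponential kernel in plain NumPy. The kernel hyperparameters, noise level, and toy sine-wave data are illustrative assumptions and are not drawn from any of the papers listed below.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sqdist = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at the test inputs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    # Cholesky factorization K = L L^T for numerically stable solves.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                      # posterior mean at X_test
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)  # posterior (latent) variance
    return mean, var

# Toy example: noisy observations of sin(x); predictions come with error bars.
X_train = np.linspace(0, 5, 8)
y_train = np.sin(X_train) + 0.05 * np.random.randn(8)
X_test = np.linspace(0, 5, 50)
mean, var = gp_posterior(X_train, y_train, X_test)
print(mean[:3], np.sqrt(var[:3]))  # predictive mean and standard deviation
```

The predictive standard deviation grows away from the training inputs, which is the behavior that makes GPs useful for the active learning, out-of-distribution detection, and decision-making applications surveyed in the papers below.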
Papers
Distributional Gaussian Processes Layers for Out-of-Distribution Detection
Sebastian G. Popescu, David J. Sharp, James H. Cole, Konstantinos Kamnitsas, Ben Glocker
Supernova Light Curves Approximation based on Neural Network Models
Mariia Demianenko, Ekaterina Samorodova, Mikhail Sysak, Aleksandr Shiriaev, Konstantin Malanchev, Denis Derkach, Mikhail Hushchyn
Bayesian Active Learning for Scanning Probe Microscopy: from Gaussian Processes to Hypothesis Learning
Maxim Ziatdinov, Yongtao Liu, Kyle Kelley, Rama Vasudevan, Sergei V. Kalinin
Posterior and Computational Uncertainty in Gaussian Processes
Jonathan Wenger, Geoff Pleiss, Marvin Pförtner, Philipp Hennig, John P. Cunningham
Efficient Transformed Gaussian Processes for Non-Stationary Dependent Multi-class Classification
Juan Maroñas, Daniel Hernández-Lobato