Gaussian Process
Gaussian processes (GPs) are nonparametric probabilistic models for function approximation that provide calibrated uncertainty estimates alongside their predictions. Current research extends their capabilities through architectures such as deep GPs and hybrid models that combine GPs with neural networks or other machine learning techniques, while addressing scalability and computational-efficiency challenges, particularly in high-dimensional or time-varying settings. These advances are influencing robotics, control systems, and scientific modeling by supplying robust, uncertainty-aware predictions that enable more reliable decision-making in complex systems. Efficient algorithms and accompanying theoretical analyses further strengthen the practical applicability and trustworthiness of GP-based methods.
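As a minimal illustration of the uncertainty-aware predictions described above, the sketch below fits a standard GP regressor to noisy one-dimensional data using scikit-learn. The kernel choice, noise level, and toy data are illustrative assumptions, not settings drawn from any of the papers listed here.

# Minimal GP regression sketch (assumes numpy and scikit-learn are installed).
# Kernel, noise level, and toy function are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)

# Noisy observations of a toy function f(x) = sin(x).
X_train = rng.uniform(0.0, 10.0, size=(30, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(30)

# Squared-exponential kernel with a learned signal scale plus observation noise.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# The posterior mean and standard deviation give uncertainty-aware predictions.
X_test = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print("max predictive std:", std.max())

The predictive standard deviation grows away from the training inputs, which is the property that makes GPs useful for the decision-making and Bayesian optimization applications surveyed below.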
Papers
The generalized Hierarchical Gaussian Filter
Lilian Aline Weber, Peter Thestrup Waade, Nicolas Legrand, Anna Hedvig Møller, Klaas Enno Stephan, Christoph Mathys
Physics Inspired Approaches To Understanding Gaussian Processes
Maximilian P. Niroomand, Luke Dicks, Edward O. Pyzer-Knapp, David J. Wales
Posterior Inference on Shallow Infinitely Wide Bayesian Neural Networks under Weights with Unbounded Variance
Jorge Loría, Anindya Bhadra
Learning Switching Port-Hamiltonian Systems with Uncertainty Quantification
Thomas Beckers, Tom Z. Jiahao, George J. Pappas
Gaussian Process Port-Hamiltonian Systems: Bayesian Learning with Physics Prior
Thomas Beckers, Jacob Seidman, Paris Perdikaris, George J. Pappas
Mastering the exploration-exploitation trade-off in Bayesian Optimization
Antonio Candelieri