Gaussian Process
Gaussian processes (GPs) are nonparametric probabilistic models for function approximation that provide calibrated uncertainty estimates alongside their predictions. Current research extends their capabilities through novel architectures such as deep GPs and hybrid models that combine GPs with neural networks or other machine learning techniques, and tackles scalability and computational-efficiency challenges, particularly in high-dimensional or time-varying settings. These advances are impacting fields such as robotics, control systems, and scientific modeling by providing robust, uncertainty-aware predictions that support more reliable decision-making in complex systems. Efficient algorithms and theoretical analyses further strengthen the practical applicability and trustworthiness of GP-based methods.
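To make the idea of uncertainty-aware prediction concrete, the sketch below fits a GP regressor to synthetic 1-D data and returns a predictive mean together with a standard deviation at test inputs. It is a minimal illustration only: the choice of scikit-learn, the RBF-plus-noise kernel, and all variable names are assumptions for the example, not drawn from the papers listed here.

```python
# Minimal GP regression sketch: fit a GP to noisy 1-D data and report
# a predictive mean with an uncertainty estimate (standard deviation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic training data: noisy observations of a smooth function.
X_train = rng.uniform(-3.0, 3.0, size=(30, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(30)

# RBF kernel encodes smoothness; WhiteKernel absorbs observation noise.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Predictive mean and standard deviation quantify uncertainty at new inputs.
X_test = np.linspace(-4.0, 4.0, 50).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(mean[:3], std[:3])  # uncertainty grows outside the training range
```

Because the predictive standard deviation widens away from the observed data, downstream controllers or decision-makers can treat those regions with appropriate caution, which is the property the research directions above aim to preserve at scale.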
Papers
Generalised Gaussian Process Latent Variable Models (GPLVM) with Stochastic Variational Inference
Vidhi Lalchand, Aditya Ravuri, Neil D. Lawrence
Learning Multi-Task Gaussian Process Over Heterogeneous Input Domains
Haitao Liu, Kai Wu, Yew-Soon Ong, Chao Bian, Xiaomo Jiang, Xiaofang Wang
Learning Invariant Weights in Neural Networks
Tycho F. A. van der Ouderaa, Mark van der Wilk