Gaussian Process
Gaussian processes (GPs) are probabilistic models for function approximation and uncertainty quantification. Current research extends their capabilities through novel architectures such as deep GPs and hybrid models that combine GPs with neural networks or other machine learning techniques, while addressing scalability and computational-efficiency challenges, particularly in high-dimensional or time-varying settings. These advances are impacting fields like robotics, control systems, and scientific modeling by providing robust, uncertainty-aware predictions and enabling more reliable decision-making in complex systems. The development of efficient algorithms and theoretical analyses further enhances the practical applicability and trustworthiness of GP-based methods.
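To make the core idea concrete, the following is a minimal sketch of exact GP regression with a squared-exponential kernel: the prior is conditioned on noisy observations to yield a posterior mean and a per-point uncertainty estimate. The hyperparameter values (lengthscale, signal variance, noise level) and the toy data are illustrative assumptions, not taken from any of the papers below.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs.
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    # Exact GP regression: condition the GP prior on noisy observations.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)              # Cholesky factor for stable solves
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                   # posterior mean at the test points
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v                   # posterior covariance
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# Toy example: noisy samples of a sine function.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(x)
x_star = np.linspace(-3.0, 3.0, 7)
mean, std = gp_posterior(x, y, x_star)
```

The uncertainty-aware prediction emphasized above is visible directly in `std`: it shrinks near the training inputs and grows where the model extrapolates, which is what enables reliable downstream decision-making. The cubic cost of the Cholesky factorization in the number of training points is the scalability bottleneck that much of the listed work (tensor-network projections, stochastic gradient descent) aims to overcome.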
Papers
Projecting basis functions with tensor networks for Gaussian process regression
Clara Menzen, Eva Memmel, Kim Batselier, Manon Kok
Stochastic Gradient Descent for Gaussian Processes Done Right
Jihao Andreas Lin, Shreyas Padhy, Javier Antorán, Austin Tripp, Alexander Terenin, Csaba Szepesvári, José Miguel Hernández-Lobato, David Janz
Topological Learning for Motion Data via Mixed Coordinates
Hengrui Luo, Jisu Kim, Alice Patania, Mikael Vejdemo-Johansson
Hodge-Compositional Edge Gaussian Processes
Maosheng Yang, Viacheslav Borovitskiy, Elvin Isufi
Distributed multi-agent magnetic field norm SLAM with Gaussian processes
Frida Viset, Rudy Helmons, Manon Kok