Kernel Method
Kernel methods are a powerful class of algorithms that use kernel functions to perform computations in high-dimensional feature spaces implicitly, enabling nonlinear modeling without ever representing the high-dimensional data explicitly. Current research focuses on improving the scalability of kernel methods through techniques such as random Fourier features and Nyström approximations, and on applying them to areas such as quantum system simulation, dynamical systems modeling, and density estimation. These advances improve both the accuracy and the computational feasibility of kernel methods, broadening their use in complex data analysis tasks across scientific and practical domains.
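As a rough illustration of the implicit feature-space computation and the random Fourier feature approximation mentioned above, the sketch below compares an exact RBF kernel matrix with the inner products of an explicit low-dimensional random feature map. It is a minimal NumPy example under assumed parameter choices (kernel bandwidth gamma, number of random features), not an implementation taken from any of the papers listed below.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Exact RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def random_fourier_features(X, n_features=500, gamma=1.0, seed=None):
    """Map X to an explicit feature space whose inner products approximate
    the RBF kernel (random Fourier features, Rahimi & Recht)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the spectral density of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the exact kernel matrix with its random-feature approximation.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
K_exact = rbf_kernel(X, X)
Z = random_fourier_features(X, n_features=2000, seed=1)
K_approx = Z @ Z.T  # explicit inner products replace the kernel trick
print("max abs error:", np.abs(K_exact - K_approx).max())
```

Increasing n_features tightens the approximation of the exact kernel matrix at the cost of more memory and compute, which is the basic scalability trade-off such approximation techniques aim to manage.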
Papers
Kernel Regression with Infinite-Width Neural Networks on Millions of Examples
Ben Adlam, Jaehoon Lee, Shreyas Padhy, Zachary Nado, Jasper Snoek
Fast kernel methods for Data Quality Monitoring as a goodness-of-fit test
Gaia Grosso, Nicolò Lai, Marco Letizia, Jacopo Pazzini, Marco Rando, Lorenzo Rosasco, Andrea Wulzer, Marco Zanetti