Kernel Approximation
Kernel approximation techniques aim to represent complex kernel functions efficiently, accelerating computation and making kernel methods practical on large datasets. Current research focuses on faster approximation algorithms, such as random Fourier features and localized methods, and on extending these techniques to diverse kernel types, including asymmetric and indefinite kernels, within frameworks such as Gaussian processes and support vector machines. These advances are crucial for scaling kernel methods to real-world applications in areas such as TinyML, reinforcement learning, and the modeling of complex systems, improving both computational efficiency and model performance.
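As a concrete illustration of the random Fourier features idea mentioned above, here is a minimal NumPy sketch that approximates a Gaussian (RBF) kernel with an explicit finite-dimensional feature map, so that an inner product of features approximates a kernel evaluation. The function name `rff_features` and all parameter choices are illustrative, not taken from any particular paper; this follows the standard Rahimi–Recht construction via Bochner's theorem.

```python
import numpy as np

def rff_features(X, n_features=500, sigma=1.0, rng=None):
    """Map X (n_samples, d) to random Fourier features whose inner products
    approximate the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Sample frequencies from the kernel's spectral density (Bochner's theorem):
    # for the RBF kernel this is a Gaussian with standard deviation 1/sigma.
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # z(x) = sqrt(2/D) * cos(W^T x + b), so that E[z(x)^T z(y)] = k(x, y).
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the approximation against the exact kernel on toy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Z = rff_features(X, n_features=2000, sigma=1.0, rng=0)
K_approx = Z @ Z.T  # (200, 200) approximate Gram matrix

sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K_exact = np.exp(-sq_dists / 2.0)  # exact RBF Gram matrix with sigma = 1

print("max abs error:", np.abs(K_approx - K_exact).max())
```

The payoff is the cost model: forming the features takes O(n·d·D) time and downstream linear models work in D dimensions, instead of the O(n²) memory and O(n²·d) time needed for the exact Gram matrix, which is what makes such approximations attractive for large datasets.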