Separable Kernel

Separable kernels are kernel functions that factor into products (or sums of products) of lower-dimensional kernels, a structure that lets kernel methods measure similarity between data points while exploiting the factorization for efficient computation and storage. Current research focuses on developing new kernel architectures, such as those based on spectral truncation and localized integral/differential operators, to improve model expressiveness and computational efficiency, particularly for high-dimensional data and non-Euclidean spaces. This work addresses challenges such as over-smoothing in global kernel methods and the difficulty of constructing positive-definite kernels for complex data, leading to advances in Gaussian process regression and the numerical solution of partial differential equations. Improved algorithms for kernel learning, including convex optimization techniques, are also making kernel-based machine learning models more scalable and accurate.
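As a brief illustration of the efficiency that separability buys (a minimal sketch, not drawn from the papers listed below), the Python snippet that follows builds a product-form squared-exponential kernel on a tensor-product grid: the full Gram matrix becomes a Kronecker product of small per-dimension Gram matrices, so matrix-vector products never need the full matrix. The grid sizes, lengthscales, and the helper `rbf_gram` are illustrative assumptions, not a reference implementation from any cited work.

```python
# Minimal sketch of a separable kernel k(x, y) = prod_d k_d(x_d, y_d)
# on a tensor-product grid; all sizes and lengthscales are illustrative.
import numpy as np

def rbf_gram(x, lengthscale):
    """Gram matrix of a 1-D squared-exponential kernel on points x."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

# Per-dimension grids and lengthscales (assumed toy values).
grids = [np.linspace(0.0, 1.0, 30), np.linspace(0.0, 1.0, 40)]
lengthscales = [0.2, 0.3]

# Small per-dimension Gram matrices: 30x30 and 40x40.
K1, K2 = (rbf_gram(g, ls) for g, ls in zip(grids, lengthscales))

# The full 1200x1200 Gram matrix is their Kronecker product ...
K_full = np.kron(K1, K2)

# ... so a matrix-vector product with K_full only needs the small factors:
# (K1 kron K2) @ vec(V) equals vec(K1 @ V @ K2.T) under row-major reshaping.
v = np.random.default_rng(0).normal(size=30 * 40)
V = v.reshape(30, 40)
fast = (K1 @ V @ K2.T).reshape(-1)
assert np.allclose(K_full @ v, fast)
```

This Kronecker structure is the usual reason separable kernels appear in Gaussian process regression on gridded data: solves and log-determinants of the large Gram matrix decompose into operations on the small per-dimension factors.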

Papers