Reproducing Kernel Hilbert Space
Reproducing Kernel Hilbert Spaces (RKHS) are Hilbert spaces of functions, typically infinite-dimensional, in which point evaluation is a continuous linear functional; in machine learning they provide a principled way to represent and analyze functions, enabling powerful mathematical tools to be applied to complex data. Current research focuses on leveraging RKHS in Bayesian inference, optimization, and anomaly detection, often employing algorithms such as kernel quantile regression and kernel mean embeddings, and on exploring connections between RKHS and neural networks. This framework offers a rigorous theoretical foundation for developing robust and efficient algorithms across diverse applications, including energy forecasting, optimal control, and functional data analysis, while also providing insights into the generalization capabilities of neural networks.
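To make the kernel mean embedding idea concrete, the sketch below (not taken from the papers listed here; the Gaussian RBF kernel, bandwidth, and sample sizes are illustrative assumptions) estimates empirical mean embeddings of two samples in an RKHS and compares them via the maximum mean discrepancy (MMD), a standard use of embeddings in anomaly and distribution-shift detection.

```python
# Minimal sketch: empirical kernel mean embeddings and the (biased) MMD
# estimate between two samples, using a Gaussian RBF kernel. All names,
# bandwidths, and sample sizes below are illustrative assumptions.
import numpy as np


def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * bandwidth**2))


def mmd_squared(X, Y, bandwidth=1.0):
    """Biased estimate of the squared RKHS distance between the empirical
    mean embeddings of samples X and Y (the maximum mean discrepancy)."""
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # sample from P
    Y = rng.normal(loc=0.5, scale=1.0, size=(200, 2))  # sample from a shifted Q
    print(f"MMD^2 estimate: {mmd_squared(X, Y, bandwidth=1.0):.4f}")
```

A larger MMD estimate indicates that the two empirical mean embeddings sit farther apart in the RKHS, which is the basic signal exploited by kernel two-sample and anomaly detection methods.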
Papers
How many Neurons do we need? A refined Analysis for Shallow Networks trained with Gradient Descent
Mike Nguyen, Nicole Mücke
Rates of Convergence in Certain Native Spaces of Approximations used in Reinforcement Learning
Ali Bouland, Shengyuan Niu, Sai Tej Paruchuri, Andrew Kurdila, John Burns, Eugenio Schuster