Representer Theorem

The Representer Theorem establishes that solutions to a broad class of regularized learning problems can be written as a linear combination of kernel functions evaluated at the training points, reducing an infinite-dimensional optimization over functions to a finite-dimensional search over coefficients. Current research extends this result to deep neural networks, using both reproducing kernel Hilbert spaces (RKHS) and reproducing kernel Banach spaces (RKBS) to characterize the function spaces induced by these architectures, including feedforward and residual networks. This work aims to provide a more rigorous theoretical foundation for neural network learning, particularly in non-overparameterized regimes, and to connect theoretical results to biologically plausible learning mechanisms. The resulting insights bear on model interpretability, algorithm design, and the development of more efficient and effective machine learning methods.
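
For reference, the classical RKHS form of the theorem (in the style popularized by Schölkopf, Herbrich, and Smola) can be sketched as follows; the notation (loss $L$, regularizer $\Omega$, kernel $k$) is introduced here for illustration and is not drawn from the papers listed below. Given training data $(x_1, y_1), \ldots, (x_n, y_n)$, an RKHS $\mathcal{H}$ with reproducing kernel $k$, and a nondecreasing function $\Omega$, consider

\[
\min_{f \in \mathcal{H}} \; L\big((x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\big) + \Omega\big(\lVert f \rVert_{\mathcal{H}}\big).
\]

Then some minimizer (every minimizer, if $\Omega$ is strictly increasing) admits the representation

\[
f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i), \qquad \alpha_i \in \mathbb{R}.
\]

Kernel ridge regression is the canonical instance: with squared loss and $\Omega(t) = \lambda t^2$, the coefficients are given in closed form by $\alpha = (K + \lambda I)^{-1} y$, where $K_{ij} = k(x_i, x_j)$.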

Papers