Functional Regularization

Functional regularization is a technique used to improve the generalization and stability of machine learning models by directly constraining properties of the learned function itself (e.g., its outputs, smoothness, or frequency content), rather than its parameters alone. Current research focuses on developing novel regularization schemes, including adversarial methods and schemes based on Fourier transforms or learned feature embeddings, applied within models such as Non-negative Matrix Factorization (NMF) and deep neural networks. These advances address issues such as spectral bias in neural networks and catastrophic forgetting in continual learning, yielding improved performance in applications including source separation, object counting, and reinforcement learning. The broader impact lies in making machine learning algorithms more robust and efficient across diverse domains.
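To make the idea concrete, below is a minimal PyTorch sketch of one common instance: a functional penalty that keeps a network's outputs close to those of a frozen reference model on a set of anchor inputs, a form often used to mitigate catastrophic forgetting in continual learning. The function name `functional_reg_loss`, the anchor set `x_anchor`, and the weight `lam` are illustrative assumptions, not the API of any specific paper.

```python
import torch
import torch.nn.functional as F

def functional_reg_loss(model, old_model, x_task, y_task, x_anchor, lam=0.1):
    """Task loss plus a functional penalty: instead of constraining weights,
    penalize how far the new model's *outputs* drift from a frozen reference
    model's outputs on anchor inputs (regularization in function space)."""
    # Standard data-fitting term on the current task.
    task_loss = F.cross_entropy(model(x_task), y_task)

    # Reference outputs from the frozen previous model (no gradients needed).
    with torch.no_grad():
        old_out = old_model(x_anchor)

    # Functional penalty: keep the learned function close to the old one
    # on the anchor set, preserving previously learned behavior.
    func_penalty = F.mse_loss(model(x_anchor), old_out)

    return task_loss + lam * func_penalty
```

In practice, the anchor set might be a replay buffer, a coreset, or even random inputs; its choice determines which regions of function space the penalty preserves, and `lam` trades off plasticity on the new task against stability on the old one.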

Papers