Minimum Norm

Minimum-norm interpolation seeks model parameters that fit the training data exactly (zero training error) while having the smallest value of a chosen norm among all such interpolating solutions. Current research investigates the generalization performance of these interpolators, particularly in transfer learning settings and under covariate shift, using linear models and shallow neural networks as the primary frameworks. A key focus is understanding the implicit bias of optimization algorithms toward minimum-norm solutions and its relationship to explicit regularization techniques such as weight decay and adversarial training, with implications both for the theoretical understanding of generalization and for the design of robust, efficient machine learning algorithms.
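
As a brief illustration (a minimal sketch on synthetic data, not drawn from any particular paper), the snippet below computes the minimum ℓ2-norm interpolator of an overparameterized linear model via the Moore-Penrose pseudoinverse and checks that gradient descent on the squared loss, started from zero initialization, converges to the same solution, which is the standard example of implicit bias toward minimum-norm interpolation.

```python
import numpy as np

# Synthetic overparameterized setup: more features (d) than samples (n),
# so infinitely many weight vectors interpolate the data exactly.
rng = np.random.default_rng(0)
n, d = 20, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Minimum l2-norm interpolator: w = X^+ y (Moore-Penrose pseudoinverse).
# Among all w with Xw = y, this one has the smallest Euclidean norm.
w_min_norm = np.linalg.pinv(X) @ y

# Gradient descent on the squared loss from zero initialization stays in the
# row space of X and therefore converges to the same minimum-norm interpolator.
w = np.zeros(d)
lr = 1e-2
for _ in range(20_000):
    grad = X.T @ (X @ w - y) / n
    w -= lr * grad

print("train residual (pinv):", np.linalg.norm(X @ w_min_norm - y))  # ~0
print("train residual (GD):  ", np.linalg.norm(X @ w - y))           # ~0
print("gap between solutions:", np.linalg.norm(w - w_min_norm))      # ~0
```

The same comparison can be repeated with other optimizers or with explicit regularization (e.g. ridge with a vanishing penalty) to see which implicit or explicit biases recover the minimum-norm interpolator.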

Papers