L2 Regularization

L2 regularization penalizes large weights by adding the sum of squared weights, scaled by a coefficient, to a model's training loss; the goal is to prevent overfitting and improve generalization. Current research examines its effect on training dynamics in architectures such as deep neural networks and transformers, as well as its role in feature selection and domain adaptation. Studies explore how L2 regularization shapes learned representations, affecting dimensionality, sparsity, and the balance between low-dimensional features and model complexity. This work has implications for improving model performance, interpretability, and robustness across diverse applications.
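The mechanism can be sketched in a few lines of NumPy: the penalty term λ·‖w‖² is added to the loss, and its gradient 2λw shrinks the weights toward zero on every update ("weight decay"). The linear-regression setup, step counts, and λ values below are illustrative choices, not drawn from any particular paper.

```python
import numpy as np

def l2_regularized_loss(w, X, y, lam):
    """Mean squared error plus an L2 penalty lam * ||w||^2.

    The penalty grows with the squared magnitude of the weights,
    so minimizing the total loss discourages large weights.
    """
    residual = X @ w - y
    return np.mean(residual ** 2) + lam * np.sum(w ** 2)

def gradient_step(w, X, y, lam, lr=0.05):
    """One gradient-descent step. The L2 term contributes 2*lam*w
    to the gradient, shrinking w toward zero each step."""
    n = len(y)
    grad_mse = (2.0 / n) * X.T @ (X @ w - y)
    grad_penalty = 2.0 * lam * w
    return w - lr * (grad_mse + grad_penalty)

# Toy comparison: same data, with and without the penalty.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([3.0, -2.0, 1.0])
y = X @ w_true + rng.normal(scale=0.1, size=50)

w_plain = np.zeros(3)
w_reg = np.zeros(3)
for _ in range(200):
    w_plain = gradient_step(w_plain, X, y, lam=0.0)
    w_reg = gradient_step(w_reg, X, y, lam=1.0)

# The regularized solution has a strictly smaller weight norm.
print(np.linalg.norm(w_plain) > np.linalg.norm(w_reg))
```

The same effect is exposed in common libraries as a single hyperparameter (e.g. a weight-decay coefficient on the optimizer), but the underlying update is the shrinkage step shown here.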

Papers