L2 Regularization
L2 regularization is a technique that adds a penalty proportional to the squared magnitude of a model's weights to the training loss, discouraging large weights in order to prevent overfitting and improve generalization. Current research focuses on its impact on training dynamics across architectures, including deep neural networks and transformers, and on its role in feature selection and domain adaptation. Studies examine how L2 regularization shapes learned representations, affecting properties such as dimensionality, sparsity, and the trade-off between low-dimensional features and model complexity. This work has implications for improving model performance, interpretability, and robustness across diverse applications.
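As a minimal sketch of the mechanism, the example below fits a toy linear model with gradient descent: the L2 penalty λ‖w‖² contributes a gradient term 2λw that shrinks every weight toward zero at each step. The data, the regularization strength lam, and the learning rate are illustrative choices, not values from any of the papers surveyed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression data: y = X @ w_true + noise
X = rng.normal(size=(100, 5))
w_true = np.array([2.0, -1.0, 0.0, 0.5, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

lam = 0.1   # L2 regularization strength (lambda), illustrative
lr = 0.05   # learning rate, illustrative
w = np.zeros(5)

for _ in range(500):
    # Gradient of the mean-squared-error data term
    grad_mse = 2 * X.T @ (X @ w - y) / len(y)
    # Gradient of the penalty lam * ||w||^2 is 2 * lam * w,
    # pulling every weight toward zero at each update
    w -= lr * (grad_mse + 2 * lam * w)

print(w)  # shrunk toward zero relative to the unregularized fit
```

In deep learning frameworks the same effect is usually obtained via a weight-decay option on the optimizer rather than an explicit penalty term in the loss.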