Norm Regularization
Norm regularization is a technique used in machine learning to constrain model complexity and improve generalization by penalizing large parameter values, often promoting sparsity or low-rank structures. Current research focuses on developing efficient algorithms for various norm types (e.g., ℓ₀, ℓ₁, ℓ₂, ℓ₂,ₚ) within different model architectures (e.g., neural networks, support vector machines, tensor factorization), addressing challenges like non-convexity and high dimensionality. These advancements are significant for improving the performance and interpretability of machine learning models across diverse applications, including image processing, financial modeling, and continual learning.
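The core idea above, adding a norm penalty to the training loss to shrink parameters and (for ℓ₁) induce sparsity, can be illustrated with a minimal sketch. The function names, data, and hyperparameters below are illustrative, not from any particular paper: ridge (ℓ₂) is fit by plain gradient descent, while lasso (ℓ₁) uses a soft-thresholding proximal step, one standard way to handle the non-smooth ℓ₁ term.

```python
import numpy as np

def fit_ridge(X, y, lam=0.1, lr=0.01, steps=2000):
    """Minimize (1/n)||Xw - y||^2 + lam * ||w||_2^2 by gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n + 2 * lam * w  # penalty adds 2*lam*w
        w -= lr * grad
    return w

def fit_lasso(X, y, lam=0.1, lr=0.01, steps=2000):
    """Minimize (1/n)||Xw - y||^2 + lam * ||w||_1 by proximal gradient (ISTA)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / n          # gradient step on smooth part
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # prox of ell_1: soft-threshold
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([3.0, 0.0, -2.0, 0.0, 1.0])   # two coefficients are truly zero
y = X @ w_true + 0.1 * rng.normal(size=200)

w_ols = fit_ridge(X, y, lam=0.0)   # no penalty (ordinary least squares)
w_l2 = fit_ridge(X, y, lam=1.0)    # ridge shrinks all coefficients toward zero
w_l1 = fit_lasso(X, y, lam=1.0)    # lasso drives small coefficients exactly to zero
```

The ℓ₂ penalty shrinks every coefficient but keeps them nonzero, whereas the ℓ₁ proximal step can zero out coefficients exactly, which is the sparsity-promoting behavior mentioned above.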