Explicit Regularization
Explicit regularization in machine learning adds penalty terms to the optimization objective, typically of the form min_w L(w) + λΩ(w), to constrain model complexity and improve generalization. Recent research focuses on the interplay between explicit and implicit regularization, particularly in deep learning models such as convolutional neural networks and transformers, and on how this interplay affects performance in tasks such as image restoration, matrix completion, and continual learning. This line of work matters for building more robust and efficient machine learning algorithms, for addressing challenges such as overfitting and catastrophic forgetting, and ultimately for improving the reliability and performance of AI systems across diverse applications.
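To make the penalty-term mechanism concrete, here is a minimal sketch of ridge (L2) regularization added to a least-squares objective and minimized by gradient descent. All names and values (the data sizes, the penalty strength lam, the learning rate lr, the iteration count) are illustrative assumptions, not taken from any specific paper in this area.

```python
import numpy as np

# Sketch: explicit L2 regularization. The objective is
#   0.5/n * ||Xw - y||^2  +  0.5 * lam * ||w||^2
# where the second term is the explicit penalty that constrains
# model complexity; lam controls its strength.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.0, 0.5]   # only a few informative features
y = X @ true_w + 0.1 * rng.normal(size=100)

lam = 0.1    # regularization strength (assumed value)
lr = 0.01    # learning rate (assumed value)
w = np.zeros(20)

for _ in range(500):
    residual = X @ w - y
    # Gradient of the data term plus the gradient of the penalty,
    # which contributes lam * w (this is where explicit
    # regularization enters the update).
    grad = X.T @ residual / len(y) + lam * w
    w -= lr * grad

objective = 0.5 * np.mean((X @ w - y) ** 2) + 0.5 * lam * np.dot(w, w)
print(f"penalized objective: {objective:.4f}")
```

Setting lam = 0 recovers the unpenalized objective; increasing it shrinks the weights toward zero, trading training fit for generalization. The same pattern covers other explicit penalties (e.g., an L1 term for sparsity) by swapping the penalty and its gradient.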