Explicit Regularization
Explicit regularization in machine learning adds penalty terms to the optimization objective in order to constrain model complexity and improve generalization. Recent research focuses on the interplay between explicit and implicit regularization, particularly in deep models such as convolutional neural networks and transformers, and on how this interplay affects performance in tasks such as image restoration, matrix completion, and continual learning. Understanding this interplay is key to building more robust and efficient learning algorithms, addressing challenges like overfitting and catastrophic forgetting, and ultimately improving the reliability of AI systems across diverse applications.
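The idea of adding a penalty term to the objective can be illustrated with the classic example of L2 (ridge) regularization on a linear model. The sketch below is a minimal NumPy illustration, not drawn from any specific paper on this page: `fit_ridge` and its parameters are hypothetical names, and the loss is simply mean squared error plus `lam * ||w||^2`.

```python
import numpy as np

def fit_ridge(X, y, lam, lr=0.1, steps=500):
    """Gradient descent on MSE + lam * ||w||^2 (explicit L2 penalty)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # Gradient of the data term plus the gradient of the penalty term.
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

w_unreg = fit_ridge(X, y, lam=0.0)
w_reg = fit_ridge(X, y, lam=1.0)
# The explicit penalty shrinks the weight norm relative to the
# unregularized fit, trading a little bias for lower variance.
```

The same pattern underlies weight decay in deep learning: the penalty gradient `2 * lam * w` shrinks every weight toward zero at each update.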