Regularization Techniques
Regularization techniques in machine learning aim to prevent overfitting and improve model generalization by constraining model complexity or modifying the training process. Current research focuses on developing novel regularization methods tailored to specific model architectures (e.g., convolutional neural networks, language models, variational quantum circuits) and learning paradigms (e.g., reinforcement learning, continual learning), often investigating their impact on model robustness, calibration, and privacy. These advancements are significant because improved generalization and robustness are crucial for deploying reliable and trustworthy machine learning models across diverse applications, from medical imaging to finance.
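To make the idea of constraining model complexity concrete, here is a minimal sketch of one classic regularization method, L2 (ridge) regularization for linear regression, using its closed-form solution w = (XᵀX + λI)⁻¹Xᵀy. The function name `ridge_fit` and the toy data are illustrative, not drawn from any specific paper above.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form L2-regularized (ridge) regression.

    Solves w = (X^T X + lam * I)^{-1} X^T y; the penalty term
    lam * I shrinks the weights toward zero, trading a little
    bias for lower variance (i.e., less overfitting).
    """
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Toy data: the target depends only on the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=50)

w_unreg = ridge_fit(X, y, lam=0.0)   # ordinary least squares
w_reg = ridge_fit(X, y, lam=10.0)    # penalized fit

# Regularization shrinks the overall weight norm.
assert np.linalg.norm(w_reg) < np.linalg.norm(w_unreg)
```

Increasing `lam` strengthens the penalty and shrinks the weights further; the architecture-specific methods surveyed above (e.g., for convolutional networks or language models) build on this same bias-variance trade-off.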