Weight Regularization
Weight regularization is a technique for improving the performance and generalization of machine learning models, primarily by constraining the magnitude or distribution of model parameters (weights). Current research applies weight regularization to challenges in domains such as offline policy learning, self-supervised learning, and continual learning, often through importance-weighting regularization, L1/L2 penalties, and adaptive weight modification. These methods aim to mitigate issues such as high estimator variance, catastrophic forgetting, and dimensional collapse, ultimately yielding more robust and reliable models across diverse applications. A minimal sketch of the L1/L2 case appears below.
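To make the L1/L2 idea concrete, the sketch below adds weight penalties to a task loss, assuming a PyTorch model; the model, coefficients, and function names are illustrative placeholders and do not come from any of the listed papers.

import torch
import torch.nn as nn

# Illustrative sketch: adding L1/L2 penalties on the weights to a task loss.
# The model, task loss, and coefficients are placeholder choices (assumptions).
model = nn.Linear(10, 2)            # placeholder model
criterion = nn.CrossEntropyLoss()   # placeholder task loss
l1_coeff, l2_coeff = 1e-5, 1e-4     # regularization strengths (hyperparameters)

def regularized_loss(inputs, targets):
    # Task loss plus penalties on the magnitude of all parameters:
    # L1 encourages sparsity, L2 shrinks weights toward zero.
    task_loss = criterion(model(inputs), targets)
    l1_penalty = sum(p.abs().sum() for p in model.parameters())
    l2_penalty = sum(p.pow(2).sum() for p in model.parameters())
    return task_loss + l1_coeff * l1_penalty + l2_coeff * l2_penalty

# Example training step with the regularized objective.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 10)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = regularized_loss(x, y)
loss.backward()
optimizer.step()

In practice the L2 term is often applied via an optimizer's weight-decay setting rather than added to the loss explicitly; the explicit form above simply makes the penalty visible.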
Papers
Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization
Francesco Pelosin, Saurav Jha, Andrea Torsello, Bogdan Raducanu, Joost van de Weijer
Repairing Group-Level Errors for DNNs Using Weighted Regularization
Ziyuan Zhong, Yuchi Tian, Conor J. Sweeney, Vicente Ordonez, Baishakhi Ray