Regularization Model
Regularization methods improve the performance and robustness of machine learning models by constraining model complexity and preventing overfitting. Current research examines how various techniques (L1 and L2 penalties, non-convex penalties, label smoothing, and data augmentation) affect model calibration, robustness to noise and adversarial attacks, and generalization to unseen data, often within the context of specific architectures such as GANs, ResNets, and RNNs. These investigations are crucial for advancing the reliability and applicability of machine learning across diverse fields, from medical imaging and natural language processing to control systems and inverse problems. Improved regularization strategies yield more accurate, stable, and efficient models, particularly in settings with limited data or high dimensionality.
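As a concrete illustration of how several of these penalties combine in practice, the sketch below adds L2 weight decay, an explicit L1 (sparsity-promoting) penalty, and label smoothing to a classification loss in PyTorch. The toy model, data, and hyperparameters are illustrative assumptions, not drawn from the papers listed below.

```python
# Minimal sketch: L1 + L2 regularization and label smoothing (assumes PyTorch).
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Illustrative linear classifier; stands in for any architecture."""
    def __init__(self, in_dim=20, n_classes=5):
        super().__init__()
        self.fc = nn.Linear(in_dim, n_classes)

    def forward(self, x):
        return self.fc(x)

torch.manual_seed(0)
x = torch.randn(64, 20)          # toy inputs
y = torch.randint(0, 5, (64,))   # toy class labels

model = TinyNet()
# L2 regularization via the optimizer's weight_decay term.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
# Label smoothing softens the one-hot targets, which can improve calibration.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

l1_lambda = 1e-5  # illustrative strength of the explicit L1 penalty
for step in range(100):
    opt.zero_grad()
    loss = criterion(model(x), y)
    # Explicit L1 penalty on the weights encourages sparse solutions.
    loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())
    loss.backward()
    opt.step()
```

The penalty strengths (`weight_decay`, `l1_lambda`, the smoothing factor) are the knobs that trade training fit against complexity; in practice they are tuned per task, typically by validation performance.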
Papers
Learning sparsity-promoting regularizers for linear inverse problems
Giovanni S. Alberti, Ernesto De Vito, Tapio Helin, Matti Lassas, Luca Ratti, Matteo Santacesaria
From Model Based to Learned Regularization in Medical Image Registration: A Comprehensive Review
Anna Reithmeir, Veronika Spieker, Vasiliki Sideri-Lampretsa, Daniel Rueckert, Julia A. Schnabel, Veronika A. Zimmer