Regularization Model
Regularization methods aim to improve the performance and robustness of machine learning models by constraining their complexity and preventing overfitting. Current research focuses on understanding the impact of various regularization techniques—including L1, L2, and other non-convex penalties, label smoothing, and data augmentation methods—on model calibration, robustness to noise and adversarial attacks, and generalization to unseen data, often within the context of specific architectures like GANs, ResNets, and RNNs. These investigations are crucial for advancing the reliability and applicability of machine learning across diverse fields, from medical imaging and natural language processing to control systems and inverse problems. Improved regularization strategies lead to more accurate, stable, and efficient models, particularly in scenarios with limited data or high dimensionality.
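As a concrete illustration of the L1 and L2 penalties mentioned above, the sketch below adds both terms to a least-squares loss and fits a small ridge/lasso-style regression by gradient descent. The function name, penalty weights, and toy data are illustrative assumptions, not taken from any of the papers listed here.

```python
import numpy as np

def regularized_loss(w, X, y, l1=0.0, l2=0.0):
    """Least-squares loss with optional L1 (lasso) and L2 (ridge) penalties."""
    residual = X @ w - y
    data_term = 0.5 * np.mean(residual ** 2)
    penalty = l1 * np.sum(np.abs(w)) + 0.5 * l2 * np.sum(w ** 2)
    return data_term + penalty

# Toy problem: the penalties shrink the fitted weights toward zero,
# trading a small increase in training error for lower variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, 0.0, 0.0, 2.0, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

l1, l2 = 1e-3, 1e-2
w = np.zeros(5)
lr = 0.1
for _ in range(500):
    # Gradient of the data term plus the L2 penalty; the L1 part uses a
    # subgradient (sign), which suffices for this illustration.
    grad = X.T @ (X @ w - y) / len(y) + l2 * w + l1 * np.sign(w)
    w -= lr * grad

print(regularized_loss(w, X, y, l1=l1, l2=l2))
print(w)  # coefficients on the zero-signal columns are pulled toward zero
```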
Papers
Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation
Zakhar Shumaylov, Jeremy Budd, Subhadip Mukherjee, Carola-Bibiane Schönlieb
Plug-and-Play image restoration with Stochastic deNOising REgularization
Marien Renaud, Jean Prost, Arthur Leclaire, Nicolas Papadakis