Regularization Model
Regularization methods aim to improve the performance and robustness of machine learning models by constraining their complexity and preventing overfitting. Current research examines how various regularization techniques, including L1, L2, and non-convex penalties, label smoothing, and data augmentation, affect model calibration, robustness to noise and adversarial attacks, and generalization to unseen data, often in the context of specific architectures such as GANs, ResNets, and RNNs. These investigations are crucial for advancing the reliability and applicability of machine learning across diverse fields, from medical imaging and natural language processing to control systems and inverse problems. Improved regularization strategies lead to more accurate, stable, and efficient models, particularly in settings with limited data or high dimensionality.
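To make the L1 and L2 penalties concrete, here is a minimal sketch, not drawn from the papers below: it adds both penalties to an ordinary least-squares objective on synthetic data and minimizes the combined loss by (sub)gradient descent in NumPy. The penalty weights lam_l1 and lam_l2, the step size, and the data-generating setup are all illustrative assumptions.

```python
# Sketch: L1 + L2 regularized least squares via (sub)gradient descent.
# All hyperparameters here are illustrative, not taken from the listed papers.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only the first 3 of 20 features carry signal.
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 0.5]
y = X @ w_true + 0.1 * rng.normal(size=n)

def regularized_loss(w, lam_l1=0.01, lam_l2=0.01):
    """Mean squared error plus an L1 (sparsity) and an L2 (shrinkage) penalty."""
    residual = X @ w - y
    return (residual @ residual) / n + lam_l1 * np.abs(w).sum() + lam_l2 * (w @ w)

def gradient(w, lam_l1=0.01, lam_l2=0.01):
    # Subgradient of |w| is sign(w); gradient of w^T w is 2w.
    return 2 * X.T @ (X @ w - y) / n + lam_l1 * np.sign(w) + lam_l2 * 2 * w

w = np.zeros(d)
for _ in range(2000):
    w -= 0.05 * gradient(w)

print("regularized loss:", regularized_loss(w))
# The L1 term shrinks spurious coefficients; count those clearly away from zero.
print("weights above 0.05 in magnitude:", int(np.sum(np.abs(w) > 0.05)))
```

Swapping the penalty to a general Lp term, as studied in the sparse-regression paper listed below, only changes the penalty and its (sub)gradient; the overall fit-plus-penalty structure stays the same.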
Papers
On sparse regression, Lp-regularization, and automated model discovery
Jeremy A. McCulloch, Skyler R. St. Pierre, Kevin Linka, Ellen Kuhl
GReAT: A Graph Regularized Adversarial Training Method
Samet Bayram, Kenneth Barner
Increasing Entropy to Boost Policy Gradient Performance on Personalization Tasks
Andrew Starnes, Anton Dereventsov, Clayton Webster