Regularization Parameter
Regularization parameters control the trade-off between fitting the training data and preventing overfitting across a wide range of machine learning models. Current research focuses on optimizing these parameters, exploring methods such as bilevel optimization and data-driven approaches, often within specific model architectures such as neural networks and kernel methods. Effective regularization parameter selection is crucial for improving model generalization, with impact on fields ranging from image processing and genetics to causal inference and robust prediction under data shifts. The development of efficient and theoretically grounded methods for determining optimal regularization parameters remains a significant area of active investigation.
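To make the trade-off concrete, here is a minimal, hypothetical sketch (not drawn from any of the papers below) of the most common data-driven selection method: choosing a ridge regression regularization parameter lambda by k-fold cross-validation over a small grid. All function names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_mse(X, y, lam, k=5, seed=0):
    """Mean validation MSE of ridge with parameter lam under k-fold CV."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[val] @ w - y[val]) ** 2))
    return float(np.mean(errs))

# Synthetic data (illustrative): 40 samples, 10 features, sparse true signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true + 0.5 * rng.normal(size=40)

# Too-small lam risks overfitting the noise; too-large lam underfits.
grid = [1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0]
scores = {lam: cv_mse(X, y, lam) for lam in grid}
best_lam = min(scores, key=scores.get)
print("best lambda by CV:", best_lam)
```

Grid search with cross-validation is the simplest baseline; the bilevel-optimization approaches mentioned above instead treat the validation loss as an outer objective and optimize lambda directly, which scales better when there are many regularization parameters.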
Papers
Variable Selection in Maximum Mean Discrepancy for Interpretable Distribution Comparison
Kensuke Mitsuzawa, Motonobu Kanagawa, Stefano Bortoli, Margherita Grossi, Paolo Papotti
Convergent plug-and-play with proximal denoiser and unconstrained regularization parameter
Samuel Hurault, Antonin Chambolle, Arthur Leclaire, Nicolas Papadakis