Nonsmooth Regularization

Nonsmooth regularization addresses the challenge of optimizing objective functions that contain non-differentiable components, which arise frequently in machine learning and signal processing. Current research focuses on efficient algorithms, such as proximal gradient methods and variants of ADMM (the alternating direction method of multipliers), that handle these nonsmooth terms within various model architectures, including neural networks and federated learning settings. These advances improve the training of models with sparsity-inducing regularizers, yielding better performance in applications such as matrix completion, image reconstruction, and structured neural network training. The resulting gains in model efficiency and accuracy have significant implications across diverse scientific and engineering domains.
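To make the idea concrete, here is a minimal sketch (in NumPy, with illustrative problem sizes and parameter values chosen for the example, not taken from any particular paper) of the simplest proximal gradient method, ISTA, applied to the lasso: a least-squares loss plus an L1 sparsity-inducing regularizer. Each iteration takes a gradient step on the smooth term, then applies the proximal operator of the nonsmooth term, which for the L1 norm is soft-thresholding:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: shrinks each entry toward zero,
    # setting entries with |v_i| <= tau exactly to zero (hence sparsity).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, step, n_iters=500):
    # Proximal gradient (ISTA) for the lasso problem
    #   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
    # Alternates a gradient step on the smooth least-squares term
    # with the prox of the nonsmooth L1 term.
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)              # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Example: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[:5] = [3.0, -2.0, 1.5, 4.0, -1.0]      # 5 nonzero coefficients
b = A @ x_true + 0.01 * rng.standard_normal(100)

step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz const. of grad
x_hat = ista(A, b, lam=0.5, step=step)
```

Because soft-thresholding produces exact zeros rather than merely small values, the recovered `x_hat` is genuinely sparse, which is the practical appeal of nonsmooth regularizers over smooth surrogates.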

Papers