Convex Composite Optimization

Convex composite optimization focuses on efficiently minimizing objectives of the form F(x) = f(x) + g(x), where f is smooth and convex and g is convex but possibly non-smooth (e.g., an L1 penalty), a structure common in machine learning and other fields. Current research emphasizes developing and analyzing algorithms such as proximal gradient methods, augmented Lagrangian methods, and variants of Newton's method, often incorporating techniques like variance reduction, momentum, and preconditioning to improve convergence rates and handle stochasticity. These advances matter because they enable large-scale optimization problems arising in diverse applications, including machine learning model training and signal processing, to be solved with improved speed and robustness.
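
To make the composite structure concrete, here is a minimal sketch of a proximal gradient method (ISTA) for the lasso-type problem min_x (1/2)||Ax - b||^2 + lam||x||_1; the function names and parameter choices are illustrative assumptions, not taken from any specific paper in this collection.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: shrinks each entry of v toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, n_iters=500):
    """Minimize (1/2)||Ax - b||^2 + lam * ||x||_1 via proximal gradient (ISTA).

    Illustrative sketch: a fixed step size 1/L is used, where L = ||A||_2^2
    is the Lipschitz constant of the gradient of the smooth term.
    """
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2
    step = 1.0 / L
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)   # prox step on the non-smooth term
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    x_true = np.zeros(20)
    x_true[:3] = [2.0, -1.5, 1.0]                         # sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    print(np.round(proximal_gradient(A, b, lam=0.1), 2))
```

Adding Nesterov momentum to this loop (yielding FISTA) improves the worst-case convergence rate from O(1/k) to O(1/k^2), one example of the acceleration techniques mentioned above.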

Papers