Convex Composite Optimization
Convex composite optimization studies the efficient minimization of objectives of the form F(x) = f(x) + g(x), where f is smooth and convex and g is convex but possibly non-smooth, a structure that arises frequently in machine learning and signal processing. Current research emphasizes developing and analyzing algorithms such as proximal gradient methods, augmented Lagrangian methods, and variants of Newton's method, often incorporating techniques like variance reduction, momentum, and preconditioning to improve convergence rates and handle stochasticity. These advances matter because they make large-scale problems, from training machine learning models to sparse signal recovery, solvable with improved speed and robustness.
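As a concrete illustration of the composite structure, the sketch below applies the classical proximal gradient method (ISTA) to the lasso problem, minimizing 0.5‖Ax − b‖² + λ‖x‖₁. The smooth term is handled by a gradient step and the non-smooth ℓ1 term by its proximal operator (soft-thresholding). This is a minimal textbook sketch, not the method of any particular paper listed on this page; the function names, step-size choice, and problem sizes are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, n_iters=500):
    # Minimize F(x) = 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    # The first term is smooth; the second is non-smooth but prox-friendly.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    step = 1.0 / L                           # fixed step size 1/L guarantees convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)             # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox step on the non-smooth part
    return x

# Usage (hypothetical data): recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 200))
x_true = np.zeros(200)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = proximal_gradient(A, b, lam=0.1)
```

Momentum variants such as FISTA follow the same template but apply the prox step at an extrapolated point, improving the convergence rate from O(1/k) to O(1/k²) on this problem class.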