Weakly Convex
Weakly convex optimization studies functions that are not convex themselves but become convex once a sufficiently large quadratic is added, i.e., f(x) + (ρ/2)‖x‖² is convex for some ρ > 0. Current research emphasizes designing and analyzing algorithms, such as stochastic subgradient and proximal methods, for solving weakly convex problems arising in machine learning, particularly in minimax, compositional, and bilevel settings. These advances are crucial for tackling non-convex challenges in areas such as robust model training, hyperparameter tuning, and domain adaptation, yielding improved efficiency and stronger theoretical guarantees for these applications.
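As a concrete illustration of the kind of algorithm studied here, below is a minimal sketch of the stochastic subgradient method applied to a classic weakly convex objective, a robust phase-retrieval loss f(x) = mean_i |(aᵢᵀx)² − bᵢ|. The problem data, step-size schedule, and warm start are all illustrative assumptions, not taken from any specific paper in this collection.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200

# Synthetic phase-retrieval instance: b_i = (a_i^T x_true)^2
x_true = rng.normal(size=d)
A = rng.normal(size=(n, d))
b = (A @ x_true) ** 2

def loss(x):
    # Weakly convex objective: mean absolute residual of squared measurements
    return np.mean(np.abs((A @ x) ** 2 - b))

def stoch_subgrad(x, i):
    # Subgradient of |(a_i^T x)^2 - b_i| at x (chain rule through |.|)
    r = A[i] @ x
    return np.sign(r ** 2 - b[i]) * 2.0 * r * A[i]

# Warm start near a solution (weakly convex problems are only
# tractable locally in general)
x = x_true + 0.3 * rng.normal(size=d)
loss0 = loss(x)

for t in range(2000):
    i = rng.integers(n)
    step = 0.01 / np.sqrt(t + 1)  # diminishing step size
    x = x - step * stoch_subgrad(x, i)
```

The diminishing step size mirrors the schedules used in convergence analyses for weakly convex problems, where guarantees are typically stated in terms of the gradient of the Moreau envelope rather than the objective itself.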