Weakly Convex

Weakly convex optimization focuses on minimizing (or, symmetrically, maximizing) functions that are not convex (or concave) but are close to it in a precise sense: a function f is ρ-weakly convex if x ↦ f(x) + (ρ/2)‖x‖² is convex, so adding a sufficiently large quadratic restores convexity. Current research emphasizes developing and analyzing algorithms, such as stochastic subgradient and proximal methods, for weakly convex problems arising in machine learning, particularly in minimax, compositional, and bilevel settings. These advances are crucial for tackling nonconvex challenges in areas like robust model training, hyperparameter tuning, and domain adaptation, yielding improved efficiency and sharper theoretical guarantees for these applications.
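
As a concrete illustration (a minimal sketch, not taken from any specific paper listed below), the snippet applies a stochastic subgradient method with an O(1/√t) step size to robust phase retrieval, f(x) = meanᵢ |⟨aᵢ, x⟩² − bᵢ|, a standard weakly convex benchmark. The function name and parameters (`stochastic_subgradient_phase_retrieval`, `gamma0`) are illustrative assumptions, not references to an existing library.

```python
import numpy as np

def stochastic_subgradient_phase_retrieval(A, b, x0, steps=5000, gamma0=0.05, seed=0):
    """Stochastic subgradient method for the weakly convex objective
    f(x) = mean_i |<a_i, x>^2 - b_i| (robust phase retrieval)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = x0.copy()
    for t in range(steps):
        i = rng.integers(n)                       # sample one measurement uniformly
        r = A[i] @ x
        g = np.sign(r**2 - b[i]) * 2.0 * r * A[i]  # subgradient of |<a_i, x>^2 - b_i|
        x -= gamma0 / np.sqrt(t + 1) * g           # decaying O(1/sqrt(t)) step size
    return x

# Usage on synthetic data: recover x_star (up to sign) from squared measurements.
d, n = 20, 500
rng = np.random.default_rng(1)
x_star = rng.normal(size=d)
A = rng.normal(size=(n, d))
b = (A @ x_star) ** 2
x_hat = stochastic_subgradient_phase_retrieval(A, b, rng.normal(size=d))
```

The decaying step size is the standard choice in analyses of this setting, where convergence is typically measured through the gradient of the Moreau envelope rather than the objective gap, since the objective itself is nonsmooth and nonconvex.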

Papers