Nonsmooth Nonconvex Optimization

Nonsmooth nonconvex optimization addresses the problem of minimizing functions that are neither everywhere differentiable nor convex, so neither classical gradient-based analysis nor convexity-based global optimality arguments apply. Current research focuses on developing and analyzing first-order methods, including variants of stochastic gradient descent and augmented Lagrangian approaches, often tailored to specific problem structures such as bilevel optimization or those arising in training neural networks (e.g., with ReLU activations and sparsity-inducing penalties). These methods aim to provide provable convergence guarantees, typically to generalized stationary points, at a computational cost that scales to high-dimensional problems, with impact on fields such as machine learning, robust control, and signal processing, where traditional smooth or convex solvers often fail on large-scale real-world problems.
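
As a concrete illustration of the kind of problem and method described above, the sketch below runs a stochastic subgradient method on a one-hidden-layer ReLU regressor with an L1 (sparsity-inducing) penalty, a prototypical nonsmooth nonconvex objective. This is a minimal, self-contained example under assumed choices: the synthetic data, network sizes, diminishing step-size schedule, and the subgradient selection at the ReLU and absolute-value kinks are illustrative, not taken from any particular paper.

```python
# Minimal sketch: stochastic subgradient descent on a nonsmooth, nonconvex
# objective -- a one-hidden-layer ReLU regressor with an L1 (sparsity) penalty.
# All names, dimensions, and step sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (assumed setup for the demo).
n, d, h = 512, 20, 32
X = rng.normal(size=(n, d))
y = np.sin(X @ rng.normal(size=d))            # nonlinear target

# Parameters of f(W1, W2) = mean((W2 . relu(W1 x) - y)^2) + lam * ||W1||_1
W1 = rng.normal(scale=0.1, size=(h, d))
W2 = rng.normal(scale=0.1, size=h)
lam = 1e-3

def loss(W1, W2):
    pred = np.maximum(X @ W1.T, 0.0) @ W2
    return np.mean((pred - y) ** 2) + lam * np.abs(W1).sum()

step0 = 0.1
for t in range(1, 2001):
    # Sample a minibatch (stochastic first-order oracle).
    idx = rng.choice(n, size=32, replace=False)
    Xb, yb = X[idx], y[idx]

    # Forward pass.
    Z = Xb @ W1.T                 # pre-activations
    A = np.maximum(Z, 0.0)        # ReLU (nonsmooth)
    pred = A @ W2
    r = pred - yb

    # Subgradient selection: 0 at the ReLU kink and sign(0) = 0 for |.|,
    # a standard choice of element from the Clarke subdifferential.
    gW2 = (2.0 / len(idx)) * (A.T @ r)
    dA = np.outer(r, W2)                      # dL/dA
    dZ = dA * (Z > 0)                          # chain rule through ReLU
    gW1 = (2.0 / len(idx)) * (dZ.T @ Xb) + lam * np.sign(W1)

    # Diminishing step size, the usual schedule for subgradient methods.
    step = step0 / np.sqrt(t)
    W1 -= step * gW1
    W2 -= step * gW2

    if t % 500 == 0:
        print(f"iter {t:4d}  loss {loss(W1, W2):.4f}")
```

On objectives like this, such methods are typically analyzed for convergence to generalized (e.g., Clarke) stationary points rather than global minima, which is why the step-size schedule and the choice of subgradient at kinks matter in the theory.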

Papers