Nonsmooth Nonconvex
Nonsmooth nonconvex optimization concerns minimizing functions that are neither differentiable everywhere nor convex, so they may have many local minima and other stationary points, and algorithms typically aim for approximate stationary points rather than a guaranteed global minimum. Current research emphasizes developing and analyzing efficient algorithms, such as variants of stochastic (sub)gradient descent and Lagrangian-based methods, often incorporating techniques like random reshuffling and coordinate descent to handle large-scale problems. These advances matter in diverse fields, notably deep learning, where nonsmooth nonconvex objectives arise routinely in model training and where better algorithms translate into faster convergence and more stable training. The development of robust and efficient algorithms for this class of problems continues to drive progress across applications.
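To make the algorithmic idea concrete, here is a minimal sketch of stochastic subgradient descent with random reshuffling on a toy nonsmooth nonconvex objective (an absolute-error data fit plus a capped-L1 regularizer). The objective, step-size schedule, and all names are illustrative assumptions, not a specific method from the literature.

```python
# Stochastic subgradient descent with random reshuffling on a toy
# nonsmooth nonconvex problem:
#   minimize  mean_i |a_i·x - b_i| + lam * sum_j min(|x_j|, tau)
# The capped-L1 term makes the objective nonconvex as well as nonsmooth.
import numpy as np

def capped_l1(x, tau):
    """Nonconvex, nonsmooth regularizer: sum_j min(|x_j|, tau)."""
    return np.minimum(np.abs(x), tau).sum()

def subgrad_loss(x, A, b, lam, tau):
    """One valid subgradient of the mini-batch objective at x."""
    r = A @ x - b
    g_data = A.T @ np.sign(r) / len(b)            # subgradient of the |.| data fit
    g_reg = lam * np.sign(x) * (np.abs(x) < tau)  # zero on the flat part of the cap
    return g_data + g_reg

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.1 * rng.standard_normal(200)

x = np.zeros(20)
batch, lam, tau = 20, 0.1, 0.5
for epoch in range(50):
    perm = rng.permutation(len(b))   # random reshuffling: new data order each epoch
    step = 0.1 / (1 + epoch)         # diminishing step size
    for start in range(0, len(b), batch):
        idx = perm[start:start + batch]
        x -= step * subgrad_loss(x, A[idx], b[idx], lam, tau)

obj = np.abs(A @ x - b).mean() + lam * capped_l1(x, tau)
print(f"final objective: {obj:.4f}")
```

The reshuffling step is the only difference from plain stochastic subgradient descent: each epoch visits every data point exactly once in a fresh random order, which is the sampling scheme whose convergence behavior the cited line of work analyzes in the nonsmooth nonconvex setting.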