Nonsmooth Optimization

Nonsmooth optimization addresses the minimization of functions with non-differentiable points, which arise routinely in machine learning and control problems. Current research focuses on developing and analyzing efficient algorithms, such as subgradient methods, primal-dual approaches, and Adam-family adaptations, often combined with techniques like proximal gradient steps and coordinate descent to handle nonsmoothness and nonconvexity. These advances yield more effective and robust solutions in diverse applications, including robust control, matrix factorization, and the training of nonsmooth neural networks.
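
To make the proximal gradient idea mentioned above concrete, here is a minimal sketch applied to the lasso problem, minimizing (1/2)||Ax - b||^2 + lam*||x||_1: the smooth quadratic term gets an ordinary gradient step, while the non-differentiable l1 term is handled through its proximal operator (soft-thresholding). The function names `proximal_gradient_lasso` and `soft_threshold` are illustrative choices, not from any particular paper surveyed here.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding), applied elementwise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, n_iters=500):
    # Minimize (1/2)||Ax - b||^2 + lam*||x||_1 via proximal gradient (ISTA).
    x = np.zeros(A.shape[1])
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
    # of the gradient of the smooth quadratic part.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                         # gradient of smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox step on l1 part
    return x

# Usage: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = proximal_gradient_lasso(A, b, lam=0.1)
```

A plain subgradient method would also apply here but typically converges more slowly; splitting the objective into a smooth part plus a simple nonsmooth part, as above, is what lets proximal-style methods retain gradient-descent-like behavior despite the non-differentiability.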

Papers