Nonsmooth Optimization
Nonsmooth optimization addresses the problem of minimizing functions with non-differentiable points, which arise frequently in machine learning and control. Current research focuses on developing and analyzing efficient algorithms, such as subgradient methods, primal-dual approaches, and Adam-family adaptations, often incorporating techniques like proximal gradient steps and coordinate descent to handle nonsmoothness and nonconvexity. These advances improve our ability to solve challenging optimization problems in diverse applications, including robust control, matrix factorization, and the training of nonsmooth neural networks, yielding more effective and robust solutions in these fields.
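To make the proximal gradient idea mentioned above concrete: it splits the objective into a smooth part, handled by an ordinary gradient step, and a nonsmooth part, handled by its proximal operator. Below is a minimal sketch assuming an L1-regularized least-squares (lasso) objective, where the proximal operator is closed-form soft-thresholding; the function names and data are hypothetical illustrations, not taken from any particular paper on this topic.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, step=None, n_iters=500):
    # Minimize 0.5 * ||Ax - b||^2 + lam * ||x||_1 via proximal gradient (ISTA).
    n = A.shape[1]
    if step is None:
        # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
        # of the gradient of the smooth term.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                          # gradient step on the smooth term
        x = soft_threshold(x - step * grad, step * lam)   # prox step on the nonsmooth term
    return x

# Hypothetical usage on synthetic sparse-recovery data:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = proximal_gradient_lasso(A, b, lam=0.1)

The point of the splitting is that the nonsmooth term never needs a (sub)gradient: as long as its proximal operator is cheap to evaluate, as it is for the L1 norm, each iteration costs little more than a gradient step on the smooth part.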