Smooth Nonconvex
Smooth nonconvex optimization concerns finding minima of objective functions that are smooth (typically with Lipschitz-continuous gradients) but nonconvex, so they may have many local minima and saddle points rather than a single basin of attraction. Current research emphasizes efficient algorithms, such as stochastic gradient descent (SGD) and its variance-reduced and accelerated variants, as well as random reshuffling, that escape saddle points and converge to approximate stationary points or local minima, often with high-probability guarantees. These advances are important for machine learning problems such as neural network training and empirical risk minimization, where nonconvexity is the norm, and they translate into better model performance and training efficiency.
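As a concrete illustration of the ideas above, the sketch below runs random-reshuffling SGD with a small isotropic perturbation whenever the gradient is nearly zero, on a toy smooth nonconvex finite-sum objective (a phase-retrieval-style quartic with a strict saddle at the origin). The objective, hyperparameters, and function names are illustrative assumptions, not taken from any particular paper on this topic.

```python
# Minimal sketch (illustrative assumptions throughout): random-reshuffling SGD
# with a small perturbation near stationary points, on a toy smooth nonconvex
# finite-sum problem. Hyperparameters are not tuned.
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(x) = (1/n) sum_i ((a_i^T x)^2 - y_i)^2 with y_i = (a_i^T x*)^2.
# It is smooth (a polynomial), nonconvex, bounded below by 0, has a strict
# saddle point at x = 0, and global minima at x = +/- x*.
n, d = 32, 10
x_star = rng.standard_normal(d)
x_star /= np.linalg.norm(x_star)
a = rng.standard_normal((n, d))
y = (a @ x_star) ** 2

def grad_i(x, i):
    """Gradient of the i-th component ((a_i^T x)^2 - y_i)^2."""
    r = a[i] @ x
    return 4.0 * (r ** 2 - y[i]) * r * a[i]

def full_grad(x):
    """Full gradient of f, used only to monitor stationarity."""
    r = a @ x
    return 4.0 * ((r ** 2 - y) * r) @ a / n

def rr_sgd(x0, epochs=400, lr=5e-4, radius=1e-3, tol=1e-2):
    """Random-reshuffling SGD: each epoch is one pass over a fresh permutation
    (sampling without replacement). When the full gradient is nearly zero, add
    a small isotropic perturbation so the iterate can leave strict saddle
    points; near a genuine local minimum the same small kick is harmless."""
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):
            x -= lr * grad_i(x, i)
        if np.linalg.norm(full_grad(x)) < tol:
            x += radius * rng.standard_normal(d)
    return x

x = rr_sgd(np.zeros(d))  # x = 0 is a saddle: unperturbed SGD started here never moves
print("final gradient norm:", np.linalg.norm(full_grad(x)))
print("distance to +/- x*:", min(np.linalg.norm(x - x_star),
                                 np.linalg.norm(x + x_star)))
```

Started exactly at the saddle, the iterate only moves because of the perturbation step; once it escapes, the reshuffled passes drive it toward one of the global minimizers, mirroring the saddle-escape and convergence behavior the algorithms in this area are designed to guarantee.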
Papers