Smooth Nonconvex

Smooth nonconvex optimization focuses on minimizing functions that are smooth (typically meaning the gradient is Lipschitz continuous) but nonconvex, so the landscape can contain many local minima and saddle points and finding a global minimum is generally intractable. Current research emphasizes efficient algorithms, such as stochastic gradient descent (SGD) variants (including variance-reduced and accelerated methods) and random reshuffling, that reach approximate stationary points, escape saddle points, and converge to local minima, often with high-probability guarantees. These advances are crucial for challenging machine learning problems, including neural network training and empirical risk minimization, where nonconvexity is prevalent, and they translate into improved model performance and training efficiency.
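
To make the setting concrete, below is a minimal sketch (not taken from any paper listed here) of minibatch SGD with random reshuffling on a smooth nonconvex finite-sum objective. The specific loss, data, and hyperparameters are illustrative choices; each epoch draws a fresh permutation of the samples so every example is visited exactly once, and the final full-batch gradient norm gives a rough measure of approximate stationarity.

```python
# Minimal sketch: SGD with random reshuffling on the smooth nonconvex
# finite-sum objective f(w) = (1/n) * sum_i log(1 + (x_i . w - y_i)^2),
# whose per-sample losses are smooth but nonconvex (illustrative choice).
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 10
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def minibatch_grad(w, idx):
    """Average gradient of the loss over the samples in `idx`."""
    r = X[idx] @ w - y[idx]            # residuals
    coef = 2.0 * r / (1.0 + r ** 2)    # derivative of log(1 + r^2) w.r.t. r
    return X[idx].T @ coef / len(idx)

w = rng.standard_normal(d)
lr, batch, epochs = 0.05, 32, 50
for epoch in range(epochs):
    perm = rng.permutation(n)          # random reshuffling: new permutation
    for start in range(0, n, batch):   # each epoch, every sample used once
        idx = perm[start:start + batch]
        w -= lr * minibatch_grad(w, idx)

full_grad = minibatch_grad(w, np.arange(n))
print("||grad f(w)|| =", np.linalg.norm(full_grad))  # approximate stationarity
```

Variance-reduced or perturbed variants would modify the inner update (e.g., adding a correction term or occasional isotropic noise to escape saddle points), but the reshuffled sampling loop above is the common backbone.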

Papers