Non-Convex Optimization

Non-convex optimization tackles the challenge of finding good solutions when the objective function has multiple local minima and saddle points, so straightforward gradient-based approaches can stall far from a global optimum. Current research emphasizes efficient algorithms, such as adaptive methods (like AdaGrad and Adam) and stochastic gradient descent variants, that can escape saddle points and converge to good local minima, often aided by techniques like regularization and variance reduction. This field is crucial for advancing machine learning, particularly deep learning and other high-dimensional applications, because it enables the training of complex models and improves their performance and scalability.
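As a minimal illustrative sketch (not taken from any paper listed below), the snippet implements a hand-rolled Adam update on a simple double-well objective and combines it with random restarts; the objective, hyperparameters, and helper names (`f`, `grad_f`, `adam_minimize`) are chosen here purely for illustration. It shows the two ideas from the summary in miniature: adaptive moment-based updates, and the fact that in a non-convex landscape the minimum reached depends on the initialization.

```python
import numpy as np

def f(x):
    # Double-well objective: two local minima, the deeper one near x ~ -1.
    return (x**2 - 1.0)**2 + 0.3 * x

def grad_f(x):
    return 4.0 * x * (x**2 - 1.0) + 0.3

def adam_minimize(x0, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Hand-rolled Adam update on a scalar parameter (illustrative only)."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad_f(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment (momentum) estimate
        v = beta2 * v + (1 - beta2) * g**2     # second-moment (adaptive scale) estimate
        m_hat = m / (1 - beta1**t)             # bias correction
        v_hat = v / (1 - beta2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Random restarts: with multiple local minima, the solution found depends on
# the starting point, so several starts are compared and the best one is kept.
rng = np.random.default_rng(0)
starts = rng.uniform(-2.0, 2.0, size=5)
solutions = [adam_minimize(x0) for x0 in starts]
best = min(solutions, key=f)
for x0, x_star in zip(starts, solutions):
    print(f"start {x0:+.2f} -> local minimum {x_star:+.3f}, f = {f(x_star):.4f}")
print(f"best of {len(starts)} restarts: x = {best:+.3f}, f(x) = {f(best):.4f}")
```

In this toy setting, starts near the shallow well converge to the worse local minimum while others reach the global one, which is why practical non-convex methods rely on stochasticity, adaptive step sizes, or multiple initializations rather than a single deterministic gradient descent run.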

Papers