Non-Convex Optimization
Non-convex optimization tackles the challenge of finding good solutions when the objective function has multiple local minima and saddle points, which defeats naive gradient-based approaches. Current research emphasizes efficient algorithms, such as adaptive methods (e.g., AdaGrad and Adam) and stochastic gradient descent variants, that can escape saddle points and converge to good local minima, often aided by techniques like regularization and variance reduction. This field is crucial for machine learning, particularly deep learning and other high-dimensional applications, because it underpins the training of complex models and improvements in their performance and scalability.
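To make the adaptive-method idea concrete, here is a minimal sketch of the standard Adam update applied to a hypothetical two-dimensional non-convex test function (the function, its gradient, and all hyperparameter values are illustrative assumptions, not taken from any paper listed below):

```python
import numpy as np

# Hypothetical non-convex test objective with several local minima:
# f(x, y) = sin(3x) + x^2 + sin(3y) + y^2
def f(w):
    return np.sin(3 * w[0]) + w[0] ** 2 + np.sin(3 * w[1]) + w[1] ** 2

def grad_f(w):
    return np.array([
        3 * np.cos(3 * w[0]) + 2 * w[0],
        3 * np.cos(3 * w[1]) + 2 * w[1],
    ])

def adam(grad, w0, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Standard Adam update: per-coordinate step sizes built from
    exponentially averaged first and second gradient moments."""
    w = np.asarray(w0, dtype=float)
    m = np.zeros_like(w)  # first-moment (mean) estimate
    v = np.zeros_like(w)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)  # bias correction
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

if __name__ == "__main__":
    w_star = adam(grad_f, w0=[2.0, -2.0])
    print("approximate minimizer:", w_star, "objective value:", f(w_star))
```

The adaptive per-coordinate scaling is what lets such methods make progress through flat regions and past saddle points where plain gradient descent with a single global step size tends to stall.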
Papers