Non-Convex Optimization
Non-convex optimization tackles the challenge of finding optimal solutions in scenarios where the objective function possesses multiple local minima, hindering straightforward gradient-based approaches. Current research emphasizes developing efficient algorithms, such as adaptive methods (like AdaGrad and Adam) and stochastic gradient descent variants, that can escape saddle points and converge to good local minima, often employing techniques like regularization and variance reduction. This field is crucial for advancing machine learning, particularly deep learning and other high-dimensional applications, by enabling the training of complex models and improving their performance and scalability.
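As a concrete illustration of the adaptive methods mentioned above, the sketch below implements the standard Adam update rule in plain NumPy on a toy one-dimensional double-well objective. The objective, hyperparameters (learning rate, beta values, noise scale), and starting points are illustrative assumptions rather than settings from any particular paper; the example only shows how the update is computed and how different initializations of a non-convex problem can settle into different local minima.

```python
import numpy as np

def loss(w):
    # Toy non-convex double-well objective: local minima near w = +1 and w = -1,
    # with the 0.3*w tilt making the left well the global minimum (assumed example).
    return (w ** 2 - 1.0) ** 2 + 0.3 * w

def grad(w):
    # Analytic gradient of the objective above.
    return 4.0 * w * (w ** 2 - 1.0) + 0.3

def adam(w0, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, noise=0.1, steps=500, seed=0):
    # Adam with a small additive gradient perturbation, loosely mimicking the
    # stochastic gradient noise that helps iterates move off flat regions.
    rng = np.random.default_rng(seed)
    w, m, v = w0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(w) + noise * rng.standard_normal()
        m = beta1 * m + (1.0 - beta1) * g          # first-moment (momentum) estimate
        v = beta2 * v + (1.0 - beta2) * g ** 2     # second-moment estimate
        m_hat = m / (1.0 - beta1 ** t)             # bias correction
        v_hat = v / (1.0 - beta2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)   # adaptive per-coordinate step
    return w

if __name__ == "__main__":
    # Different starting points can end up in different wells of the same objective.
    for start in (1.5, 0.0, -1.5):
        w_final = adam(start)
        print(f"start={start:+.1f} -> w={w_final:+.3f}, loss={loss(w_final):.4f}")
```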