Non-Convex Optimization
Non-convex optimization tackles the challenge of finding optimal solutions in scenarios where the objective function possesses multiple local minima, hindering straightforward gradient-based approaches. Current research emphasizes developing efficient algorithms, such as adaptive methods (like AdaGrad and Adam) and stochastic gradient descent variants, that can escape saddle points and converge to good local minima, often employing techniques like regularization and variance reduction. This field is crucial for advancing machine learning, particularly deep learning and other high-dimensional applications, by enabling the training of complex models and improving their performance and scalability.
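To make the adaptive methods mentioned above concrete, here is a minimal sketch of an Adam-style update loop applied to a toy two-well objective with a saddle point at the origin. The objective, starting point, and hyperparameters are illustrative choices, not taken from any specific paper; the loop shows the core idea of combining momentum with per-coordinate adaptive step sizes.

```python
import numpy as np

def f(p):
    x, y = p
    return (x**2 - 1.0)**2 + y**2      # two global minima at (+-1, 0), saddle point at (0, 0)

def grad_f(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def adam(grad, x0, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Minimal Adam-style loop: momentum plus per-coordinate adaptive scaling."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)               # first-moment (mean) estimate
    v = np.zeros_like(x)               # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)     # bias correction for the zero initialization
        v_hat = v / (1 - beta2**t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

if __name__ == "__main__":
    # Start slightly off the saddle; the adaptive steps carry the iterate into one of the minima.
    x_star = adam(grad_f, x0=[0.01, 0.5])
    print("solution:", x_star, "objective:", f(x_star))
```

Run from a point near the saddle, the iterate settles into one of the two minima at (±1, 0); plain gradient descent started exactly at the saddle would stall, which is why escaping such points is a recurring theme in this literature.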