Cubic Regularization
Cubic regularization is a family of second-order optimization methods designed to accelerate convergence to optimal solutions, particularly for the non-convex problems prevalent in machine learning. At each iteration, the method minimizes a local quadratic model of the objective augmented with a cubic penalty on the step length, which yields global convergence guarantees that plain Newton's method lacks. Current research focuses on developing parameter-free variants, improving efficiency through techniques such as subsampling and momentum acceleration, and extending the approach to constrained optimization (e.g., on Riemannian manifolds) and minimax problems. These advances broaden the applicability of cubic regularization to high-dimensional problems in fields such as reinforcement learning and deep neural networks, offering faster convergence and improved scalability compared to first-order methods.
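As a concrete illustration, the cubic-regularized Newton iteration minimizes, at each step, the model m(s) = g^T s + (1/2) s^T H s + (M/6) ||s||^3, where g and H are the current gradient and Hessian and M bounds the Hessian's Lipschitz constant. The sketch below is a minimal, didactic implementation for small dense problems: it solves the cubic subproblem by bisection on the step norm (the so-called hard case is ignored, and the regularization parameter M is fixed rather than adapted, both simplifications over practical variants).

```python
import numpy as np

def cubic_subproblem(g, H, M, r_max=1e6, iters=100):
    """Minimize g^T s + 0.5 s^T H s + (M/6)||s||^3 over s.

    Uses the stationarity condition (H + (M||s||/2) I) s = -g and
    bisects on r = ||s||. Didactic sketch: the hard case (g orthogonal
    to the bottom eigenspace of an indefinite H) is not handled.
    """
    lam_min = np.linalg.eigvalsh(H)[0]
    # The shifted Hessian H + (M r / 2) I must be positive definite.
    lo = max(0.0, -2.0 * lam_min / M) + 1e-12
    hi = r_max

    def step(r):
        return np.linalg.solve(H + 0.5 * M * r * np.eye(len(g)), -g)

    # ||step(r)|| - r is decreasing in r, so bisection finds the root.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(step(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return step(0.5 * (lo + hi))

def cubic_newton(grad, hess, x0, M=1.0, iters=50):
    """Run the cubic-regularized Newton iteration from x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + cubic_subproblem(grad(x), hess(x), M)
    return x
```

On a simple smooth objective such as f(x) = (x_1 - 1)^2 + (x_2 - 2)^2, the iteration converges to the minimizer (1, 2); near the optimum the cubic penalty becomes negligible and the step approaches the full Newton step, recovering fast local convergence.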