Cubic Regularization

Cubic regularization is a family of second-order optimization methods that augment a quadratic (Newton) model of the objective with a cubic penalty on the step length, yielding strong convergence guarantees to second-order stationary points, particularly for the non-convex problems prevalent in machine learning. Current research focuses on developing parameter-free variants, improving efficiency through techniques such as subsampling and momentum acceleration, and extending the approach to constrained settings (e.g., optimization on Riemannian manifolds) and minimax problems. These advances broaden the applicability of cubic regularization to high-dimensional problems in fields such as reinforcement learning and deep neural networks, offering better worst-case iteration complexity than first-order methods while reducing the per-iteration cost that has traditionally limited second-order approaches.
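
Concretely, each iteration minimizes a local model m(s) = gᵀs + ½ sᵀH s + (σ/3)‖s‖³, where g and H are the gradient and Hessian at the current point and σ controls the cubic penalty; adaptive variants accept or reject the step based on how well the model predicted the actual decrease, adjusting σ accordingly. The sketch below illustrates this loop in NumPy/SciPy; it is a minimal illustration under simplifying assumptions, not any particular paper's method. The function names, the acceptance threshold, and the σ halving/doubling rule are illustrative choices, and the generic BFGS call stands in for the specialized Krylov/Lanczos subproblem solvers used in practice.

```python
import numpy as np
from scipy.optimize import minimize

def cubic_model_step(g, H, sigma):
    """Approximately minimize the cubic model
    m(s) = g^T s + 0.5 s^T H s + (sigma/3) ||s||^3
    with a generic solver (a stand-in for specialized subproblem solvers)."""
    def model(s):
        return g @ s + 0.5 * s @ (H @ s) + (sigma / 3) * np.linalg.norm(s) ** 3
    return minimize(model, np.zeros(g.shape[0]), method="BFGS").x

def cubic_regularization(f, grad_f, hess_f, x0, sigma=1.0, max_iter=50, tol=1e-8):
    """Adaptive cubic-regularization loop (ARC-style sketch): accept a step
    if it realizes a fraction of the model's predicted decrease, then relax
    sigma; otherwise reject the step and regularize harder."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g, H = grad_f(x), hess_f(x)
        if np.linalg.norm(g) < tol:           # first-order stationarity reached
            break
        s = cubic_model_step(g, H, sigma)
        # Predicted decrease: m(0) - m(s) = -m(s), since m(0) = 0.
        pred = -(g @ s + 0.5 * s @ (H @ s) + (sigma / 3) * np.linalg.norm(s) ** 3)
        actual = f(x) - f(x + s)
        if pred > 0 and actual >= 0.1 * pred:  # successful step: accept, relax sigma
            x = x + s
            sigma = max(sigma / 2, 1e-6)
        else:                                  # unsuccessful: reject, increase sigma
            sigma *= 2
    return x

if __name__ == "__main__":
    # Illustrative use on the (non-convex) Rosenbrock function.
    from scipy.optimize import rosen, rosen_der, rosen_hess
    x = cubic_regularization(rosen, rosen_der, rosen_hess, np.zeros(2))
    print(x)  # approaches the Rosenbrock minimizer (1, 1)
```

Much of the research summarized above targets the two expensive pieces of this loop: cheaper (subsampled or Hessian-free) access to H, and σ update rules that need no problem-specific tuning.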

Papers