Online Newton

Online Newton methods are second-order algorithms for online convex optimization that use curvature information to accelerate learning, particularly in high-dimensional settings such as deep learning. Current research focuses on reducing their computational cost, for example by sparsifying the second-order statistics or avoiding expensive projection steps, yielding algorithms such as Sparsified Online Newton and variants that employ damped steps or self-concordant barriers. These advances make online Newton methods practical for large-scale problems, including training large neural networks and online portfolio selection, where they can improve convergence speed and predictive performance over first-order methods. The resulting gains in efficiency and accuracy have broad implications for machine learning and related fields.
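
To make the second-order update concrete, below is a minimal sketch of the classic online Newton step (ONS) on a toy online least-squares stream. The data stream, loss, and hyperparameter choices (gamma, the initial regularization) are illustrative assumptions, not values from any particular paper, and the projection back onto a bounded feasible set that the full algorithm requires is omitted here for brevity. The inverse of the matrix A_t, which accumulates gradient outer products, is maintained with a rank-one Sherman-Morrison update so each round costs O(d^2) rather than O(d^3).

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 500
w_true = rng.normal(size=d)      # hidden target for the toy stream

gamma = 1.0                      # illustrative ONS step parameter
A_inv = np.eye(d)                # inverse of A_0 = eps * I, with eps = 1.0 (assumed)
x = np.zeros(d)                  # current iterate

total_loss = 0.0
for t in range(T):
    # Receive one example; round-t loss is f_t(x) = 0.5 * (a @ x - b)^2.
    a = rng.normal(size=d)
    b = a @ w_true + 0.1 * rng.normal()
    resid = a @ x - b
    total_loss += 0.5 * resid**2
    g = resid * a                # gradient of f_t at x

    # Rank-one update A_t = A_{t-1} + g g^T, applied to the inverse
    # via the Sherman-Morrison formula.
    Ag = A_inv @ g
    A_inv -= np.outer(Ag, Ag) / (1.0 + g @ Ag)

    # Newton-like step; full ONS would also project x back onto the
    # feasible set in the A_t-norm, which this unconstrained sketch skips.
    x -= (1.0 / gamma) * (A_inv @ g)

print(f"average loss over {T} rounds: {total_loss / T:.4f}")
print(f"distance to w_true: {np.linalg.norm(x - w_true):.4f}")
```

Using the gradient outer product g g^T in place of the full Hessian is what keeps the per-round cost low while still capturing curvature, and it is also the structure that sparsified variants exploit to scale these updates to very high dimensions.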

Papers