Quasi-Newton Methods
Quasi-Newton methods are iterative optimization algorithms that build an approximation to the Hessian (or its inverse) from successive gradient evaluations, locating minima faster than first-order methods while avoiding the cost of computing exact second derivatives required by full Newton methods. Current research focuses on improving their scalability and stability for large-scale problems, particularly in deep learning and distributed computing, through techniques such as limited-memory BFGS (L-BFGS), momentum-based modifications, and alternative update schemes such as symmetric rank-1 (SR1) updates. These advances matter because they enable efficient optimization across diverse applications, including neural network training, large-scale statistical estimation, and other problems where computing the exact Hessian is intractable.
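As a minimal sketch of the core idea, the following Python/NumPy snippet implements the classical BFGS update of an inverse-Hessian approximation with a simple backtracking line search; the function and parameter names are illustrative, not taken from any particular library, and a production implementation (e.g. L-BFGS) would store only a short history of update pairs instead of a dense matrix.

```python
import numpy as np

def bfgs(f, grad, x0, max_iter=100, tol=1e-8):
    """Illustrative BFGS sketch: maintains a dense inverse-Hessian approximation H."""
    n = x0.size
    x = x0.astype(float)
    H = np.eye(n)                      # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # quasi-Newton search direction
        # Backtracking line search enforcing a simple Armijo (sufficient decrease) condition
        alpha, c, shrink = 1.0, 1e-4, 0.5
        while f(x + alpha * p) > f(x) + c * alpha * (g @ p):
            alpha *= shrink
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g    # step and gradient-difference vectors
        sy = s @ y
        if sy > 1e-12:                 # curvature condition; skip update if violated
            rho = 1.0 / sy
            I = np.eye(n)
            # BFGS update of the inverse-Hessian approximation
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example usage on the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(bfgs(f, grad, np.array([-1.2, 1.0])))   # converges near the minimizer (1, 1)
```

In practice, a limited-memory variant such as SciPy's `scipy.optimize.minimize(fun, x0, jac=grad, method='L-BFGS-B')` is typically preferred for large problems, since it never forms the dense matrix H explicitly.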