L-BFGS
L-BFGS (Limited-memory Broyden–Fletcher–Goldfarb–Shanno) is a quasi-Newton optimization algorithm that approximates curvature information from a small window of recent gradient differences, making it well suited to high-dimensional problems in machine learning and beyond. Current research focuses on improving its performance in large-scale settings, particularly for training deep neural networks, through momentum-based modifications and preconditioning strategies, often combined with distributed computing for scalability. These advances aim to improve the speed and stability of L-BFGS, enabling more efficient training of machine learning models and better solutions in system identification and other applications where high-dimensional optimization is crucial.
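The "limited-memory" aspect comes from the two-loop recursion, which applies an implicit inverse-Hessian approximation to the gradient using only the last few curvature pairs. The sketch below illustrates this recursion; the function and variable names (lbfgs_direction, s_list, y_list) are illustrative and not taken from any particular library.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Return an L-BFGS search direction -H_k * grad via the two-loop recursion.

    s_list[i] = x_{i+1} - x_i and y_list[i] = grad_{i+1} - grad_i,
    ordered from oldest to most recent; only the last m pairs are stored.
    """
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []

    # First loop: walk from the most recent pair back to the oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        alphas.append(alpha)
        q -= alpha * y

    # Scale by a diagonal initial Hessian approximation gamma * I.
    if s_list:
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q

    # Second loop: walk from the oldest pair forward to the most recent.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += s * (alpha - beta)

    return -r  # descent direction
```

In practice, off-the-shelf implementations such as `scipy.optimize.minimize(..., method="L-BFGS-B")` wrap this recursion together with a line search and bound constraints, so the memory size (e.g. SciPy's `maxcor` option) is the main parameter to tune.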