Order Gradient
Order-gradient methods, which primarily exploit second-order (and sometimes higher-order) derivative information, aim to improve the efficiency and stability of optimization algorithms across diverse applications. Current research emphasizes developing and analyzing these methods within machine learning, particularly for training deep neural networks and generative models, with attention to reducing computational cost and improving convergence rates. The improved optimization these techniques offer has significant implications for fields such as scientific computing, design-space exploration, and the development of more robust and efficient machine-learning models.
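As an illustration of the core idea (a generic sketch, not drawn from any specific paper listed below), a second-order method such as Newton's method uses the Hessian in addition to the gradient, which can converge in far fewer iterations than plain gradient descent. The example below applies a pure Newton iteration to the Rosenbrock test function using NumPy; the function choice and iteration count are illustrative assumptions:

```python
import numpy as np

def grad(p):
    """Gradient of the Rosenbrock function f(x, y) = (1-x)^2 + 100(y - x^2)^2."""
    x, y = p
    return np.array([
        -2.0 * (1.0 - x) - 400.0 * x * (y - x ** 2),
        200.0 * (y - x ** 2),
    ])

def hess(p):
    """Hessian (matrix of second derivatives) of the Rosenbrock function."""
    x, y = p
    return np.array([
        [2.0 - 400.0 * (y - 3.0 * x ** 2), -400.0 * x],
        [-400.0 * x, 200.0],
    ])

def newton(p0, iters=20):
    """Pure Newton iteration: p <- p - H(p)^{-1} grad(p)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        # Solving the linear system avoids forming the explicit inverse.
        p = p - np.linalg.solve(hess(p), grad(p))
    return p

p_star = newton([-1.2, 1.0])
print(p_star)  # converges to the minimizer (1, 1)
```

From the classic starting point (-1.2, 1.0), this reaches the minimizer in a handful of iterations, whereas first-order gradient descent typically needs thousands of steps on this function; the trade-off, and a central research challenge, is the cost of computing and solving with the Hessian at scale.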