Continuous Gradient
Continuous gradient methods are central to optimization problems across diverse fields: they seek minima of complex functions by iteratively stepping along (negative) gradient directions. Current research focuses on improving both gradient estimation and the optimization algorithms themselves, including handling noisy gradients in neural networks, escaping local optima in attacks on graph neural networks, and developing parameter-free or adaptive step-size strategies for gradient descent. These advances make optimization more efficient and robust, with impact on machine learning, computer vision, and quantum computing through faster training and improved model performance.
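To make the adaptive step-size idea concrete, below is a minimal Python sketch of gradient descent with Armijo backtracking line search, one standard way to choose the step size automatically rather than hand-tuning it. The function names, parameters, and the quadratic test problem are illustrative assumptions, not taken from any particular paper.

import numpy as np

def gradient_descent_armijo(f, grad, x0, alpha0=1.0, beta=0.5,
                            c=1e-4, tol=1e-8, max_iter=1000):
    """Minimize f via gradient descent with Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop when the gradient is (near) zero
            break
        alpha = alpha0
        # Shrink the step until the sufficient-decrease condition holds:
        #   f(x - alpha * g) <= f(x) - c * alpha * ||g||^2
        while alpha > 1e-16 and f(x - alpha * g) > f(x) - c * alpha * g.dot(g):
            alpha *= beta
        x = x - alpha * g  # step along the negative gradient
    return x

# Example: minimize the quadratic f(x) = ||x||^2, whose minimum is the origin.
f = lambda x: x.dot(x)
grad = lambda x: 2.0 * x
print(gradient_descent_armijo(f, grad, np.array([3.0, -4.0])))

Backtracking trades a few extra function evaluations per iteration for robustness: the same code works across problems with very different curvature, which is the appeal of adaptive and parameter-free schemes noted above.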