Unbounded Gradient
Unbounded gradients, where the gradient of a function can grow arbitrarily large, pose a significant challenge in optimization problems frequently encountered in machine learning and other fields. Current research focuses on analyzing and adapting adaptive optimization algorithms such as Adam, RMSProp, and AdaGrad so that they retain convergence guarantees when gradients are unbounded, often under relaxed assumptions such as affine noise variance, where the noise variance is allowed to grow with the gradient norm rather than being uniformly bounded. This work is crucial for improving the robustness and efficiency of training complex models, particularly with noisy data or ill-conditioned problems, and has implications for applications including reinforcement learning and federated learning.
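To make the setting concrete, the sketch below shows the norm version of AdaGrad (AdaGrad-Norm) run on a noisy quadratic whose stochastic gradients follow an affine-variance noise model, so neither the noise nor the gradients are uniformly bounded. This is only an illustrative toy under assumed parameters (the quadratic objective, sigma0, sigma1, and the step-size constants are not taken from any particular paper listed below); it simply shows how an adaptive step size shrinks automatically as accumulated gradient magnitudes grow.

```python
import numpy as np

# Illustrative sketch (assumed setup, not from any specific paper below):
# AdaGrad-Norm on f(x) = 0.5 * ||x||^2 with affine-variance gradient noise,
#   E[||g - grad f(x)||^2] <= sigma0^2 + sigma1^2 * ||grad f(x)||^2,
# so the stochastic gradient is not uniformly bounded.

rng = np.random.default_rng(0)

def stochastic_grad(x, sigma0=1.0, sigma1=0.5):
    """True gradient of 0.5*||x||^2 plus affine-variance noise (assumed model)."""
    true_grad = x
    noise_scale = np.sqrt(sigma0**2 + sigma1**2 * np.dot(true_grad, true_grad))
    noise = noise_scale * rng.standard_normal(x.shape) / np.sqrt(x.size)
    return true_grad + noise

def adagrad_norm(x0, eta=1.0, b0=1e-2, steps=2000):
    """AdaGrad-Norm: a single scalar step size adapted by accumulated gradient norms."""
    x = x0.copy()
    b_sq = b0**2  # accumulator for squared gradient norms
    for _ in range(steps):
        g = stochastic_grad(x)
        b_sq += np.dot(g, g)           # accumulate ||g_t||^2
        x -= eta / np.sqrt(b_sq) * g   # step size shrinks as gradients grow large
    return x

x_final = adagrad_norm(np.full(10, 5.0))
print("final ||x|| =", np.linalg.norm(x_final))  # expected to be close to 0
```

The key design point the literature studies is visible in the update line: the effective step size eta / sqrt(b_sq) is tuned by the observed gradients themselves, which is what lets such methods cope with gradients that can grow arbitrarily large while still converging under affine-variance-type assumptions.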
Papers
April 1, 2024
February 21, 2024
November 3, 2023
October 31, 2023
June 21, 2023
February 17, 2023
October 3, 2022
September 29, 2022
April 4, 2022
February 11, 2022