Unbounded Gradient

Unbounded gradients, where the gradient of a function can grow arbitrarily large, pose a significant challenge for the optimization problems that arise in machine learning and related fields. Current research focuses on analyzing adaptive algorithms such as Adam, RMSProp, and AdaGrad and on proving that they converge without a bounded-gradient assumption, often under relaxed conditions such as affine noise variance, where the variance of the stochastic gradient noise is allowed to scale affinely with the squared norm of the true gradient. This work is crucial for improving the robustness and efficiency of training complex models, particularly with noisy data or ill-conditioned problems, and has implications for applications such as reinforcement learning and federated learning.
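
To make the setting concrete, the following is a minimal sketch, not taken from any of the surveyed papers, contrasting a fixed-step SGD update with an AdaGrad-style adaptive update on a toy ill-conditioned quadratic, under an illustrative affine-noise-variance model in which the noise level grows with the true gradient norm. The objective, step size, and noise constants a and b are arbitrary choices made for illustration.

```python
# Sketch only: a toy problem where stochastic gradients are unbounded because
# the noise level grows affinely with the true gradient norm.
import numpy as np

rng = np.random.default_rng(0)

scales = np.array([1.0, 100.0])   # f(x) = 0.5 * sum(scales * x**2), condition number 100
x_sgd = np.array([5.0, 5.0])
x_ada = np.array([5.0, 5.0])

def true_grad(x):
    return scales * x

def stochastic_grad(x, a=0.5, b=0.5):
    g = true_grad(x)
    # Affine noise variance: the noise standard deviation grows affinely with
    # the gradient norm, so stochastic gradients are not uniformly bounded.
    std = a + b * np.linalg.norm(g)
    return g + std * rng.standard_normal(g.shape)

lr, eps = 0.5, 1e-8
v = np.zeros_like(x_ada)          # AdaGrad's per-coordinate sum of squared gradients

for _ in range(100):
    x_sgd = x_sgd - lr * stochastic_grad(x_sgd)        # fixed step: can diverge here
    g = stochastic_grad(x_ada)
    v = v + g**2                                       # accumulate squared gradients
    x_ada = x_ada - lr * g / (np.sqrt(v) + eps)        # effective step shrinks as gradients grow

print("fixed-step SGD  ||x|| =", np.linalg.norm(x_sgd))
print("AdaGrad-style   ||x|| =", np.linalg.norm(x_ada))
```

The intuition the sketch illustrates is that the adaptive denominator sqrt(v) caps each per-coordinate update at roughly the nominal step size lr, even when the stochastic gradient itself is unbounded, which is the kind of self-normalization the convergence analyses mentioned above rely on.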

Papers