Gradient Method
Gradient methods are iterative optimization algorithms that seek a minimum (or maximum) of a function by repeatedly stepping in the direction of the negative (or positive) gradient. Current research focuses on improving the efficiency and robustness of these methods, particularly for the non-convex problems arising in deep learning, by exploring variants such as stochastic gradient descent and adaptive methods (e.g., Adam), and by incorporating second-order information or preconditioning. These advances are crucial for training complex models, enabling progress in fields such as scientific machine learning and improving performance across a wide range of machine learning tasks.
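The update rule described above can be illustrated with a minimal sketch of plain gradient descent; the quadratic objective, fixed step size, and iteration count below are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def gradient_descent(grad_fn, x0, step_size=0.1, num_steps=100):
    """Minimize a function by repeatedly stepping against its gradient.

    grad_fn   : callable returning the gradient at a point
    x0        : starting point (array-like)
    step_size : fixed learning rate (illustrative choice)
    num_steps : number of update iterations
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(num_steps):
        x = x - step_size * grad_fn(x)  # move opposite the gradient to descend
    return x

# Example: minimize f(x) = ||x - b||^2, whose gradient is 2 * (x - b).
b = np.array([3.0, -1.0])
minimizer = gradient_descent(lambda x: 2.0 * (x - b), x0=np.zeros(2))
print(minimizer)  # converges toward b as the number of steps grows
```

Stochastic and adaptive variants modify this loop by replacing the exact gradient with a mini-batch estimate or by rescaling the step per coordinate, but the core descent update stays the same.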
Papers
Adapting Step-size: A Unified Perspective to Analyze and Improve Gradient-based Methods for Adversarial Attacks
Wei Tao, Lei Bao, Sheng Long, Gaowei Wu, Qing Tao
Goal-Image Conditioned Dynamic Cable Manipulation through Bayesian Inference and Multi-Objective Black-Box Optimization
Kuniyuki Takahashi, Tadahiro Taniguchi