Gradient Descent
Gradient descent is an iterative optimization algorithm that finds a minimum of a function by repeatedly taking steps proportional to the negative of its gradient. Current research focuses on improving its efficiency and robustness, particularly in high-dimensional spaces and for non-convex functions, through variants such as stochastic gradient descent, proximal methods, and natural gradient descent, often in the context of deep learning and other complex architectures. These advances are crucial for training increasingly large machine learning models and improving their performance across applications ranging from image recognition to scientific simulation. Key areas of investigation include understanding and mitigating vanishing/exploding gradients, overfitting, and the impact of data characteristics on convergence.
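
As a minimal sketch of the basic update rule described above (not drawn from any of the papers listed below; the function names, step size, and test objective are illustrative assumptions):

    import numpy as np

    def gradient_descent(grad, x0, lr=0.1, n_steps=100, tol=1e-8):
        """Minimize a function by stepping against its gradient."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_steps):
            g = grad(x)
            x_new = x - lr * g  # step proportional to the negative gradient
            if np.linalg.norm(x_new - x) < tol:  # stop once updates become negligible
                break
            x = x_new
        return x

    # Illustrative use: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
    x_min = gradient_descent(lambda x: 2 * (x - 3.0), x0=[0.0])
    print(x_min)  # converges toward 3.0

Stochastic gradient descent replaces the exact gradient with a noisy estimate computed on a mini-batch of data, which is what makes the method practical for the large-scale models mentioned above.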
Papers
The Differentiable Feasibility Pump
Matteo Cacciola, Alexandre Forel, Antonio Frangioni, Andrea Lodi
ATM: Improving Model Merging by Alternating Tuning and Merging
Luca Zhou, Daniele Solombrino, Donato Crisostomi, Maria Sofia Bucarelli, Fabrizio Silvestri, Emanuele Rodolà
Gradient Descent Finds Over-Parameterized Neural Networks with Sharp Generalization for Nonparametric Regression: A Distribution-Free Analysis
Yingzhen Yang, Ping Li