Gradient Descent
Gradient descent is an iterative optimization algorithm that seeks a minimum of a function by repeatedly taking steps proportional to the negative of the gradient at the current point. Current research focuses on improving its efficiency and robustness, particularly in high-dimensional spaces and on non-convex objectives, through variants such as stochastic gradient descent, proximal methods, and natural gradient descent, often in the context of deep learning and other complex architectures. These advances are central to training increasingly large machine learning models and improving their performance in applications ranging from image recognition to scientific simulation. A key line of investigation concerns understanding and mitigating issues such as vanishing and exploding gradients, overfitting, and the effect of data characteristics on convergence.
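Concretely, each step applies the update x_{t+1} = x_t - eta * grad f(x_t), where eta is the learning rate. The sketch below is a minimal illustration of that loop, not code from any of the listed papers; the quadratic objective, learning rate, and step count are arbitrary choices made for the example.

    import numpy as np

    def gradient_descent(grad, x0, lr=0.1, n_steps=100):
        """Minimize a function given its gradient by repeatedly
        stepping in the direction of the negative gradient."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_steps):
            x = x - lr * grad(x)  # step proportional to the negative gradient
        return x

    # Illustrative objective: f(x) = ||x - 3||^2, with gradient 2 * (x - 3).
    x_min = gradient_descent(lambda x: 2 * (x - 3.0), x0=[0.0, 0.0])
    print(x_min)  # approaches [3., 3.]

Stochastic variants replace grad(x) with an estimate computed on a random mini-batch of data, which is what makes the method practical at the scale of modern deep learning.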
Papers
Implicit Regularization of Gradient Flow on One-Layer Softmax Attention
Heejune Sheen, Siyu Chen, Tianhao Wang, Harrison H. Zhou
Machine Learning Optimized Orthogonal Basis Piecewise Polynomial Approximation
Hannes Waclawek, Stefan Huber
Mean-Field Microcanonical Gradient Descent
Marcus Häggbom, Morten Karlsmark, Joakim Andén