Natural Gradient
Natural gradient methods aim to improve the efficiency and stability of training complex models by accounting for the underlying geometry of the parameter space rather than treating parameters as plain Euclidean coordinates. Current research applies these ideas to diverse areas, including distributed learning (e.g., gradient compression and efficient client selection), inverse problems (using diffusion models), and neural network training (e.g., regularization and novel optimizers such as DiffGrad and AdEMAMix). These advances have significant implications for the performance and robustness of machine learning models across applications ranging from image processing and medical image analysis to scientific computing and federated learning.
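As a concrete illustration of the core idea (a minimal sketch, not drawn from any of the papers listed below), the natural gradient update preconditions the ordinary gradient with the inverse Fisher information matrix, theta <- theta - lr * F^{-1} * grad. The toy Python example below applies one such step to a categorical distribution parameterized by its logits, where the Fisher matrix has the closed form diag(p) - p p^T.

    # Minimal, self-contained sketch of one natural gradient step on a
    # categorical distribution parameterized by logits (illustrative only).
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def natural_gradient_step(logits, target, lr=0.5, damping=1e-4):
        """One update: logits <- logits - lr * F^{-1} grad, F = Fisher matrix."""
        p = softmax(logits)

        # Gradient of the negative log-likelihood w.r.t. the logits.
        one_hot = np.zeros_like(p)
        one_hot[target] = 1.0
        grad = p - one_hot

        # Fisher information of a categorical distribution in logit coordinates:
        # F = diag(p) - p p^T, damped so it stays invertible.
        F = np.diag(p) - np.outer(p, p) + damping * np.eye(len(p))

        # Preconditioning by F^{-1} rescales the step according to the geometry
        # of the output distribution rather than the raw parameter space.
        return logits - lr * np.linalg.solve(F, grad)

    logits = np.array([2.0, -1.0, 0.5])
    logits = natural_gradient_step(logits, target=1)

In practice the Fisher matrix of a large model is too big to form and invert exactly, which is why the methods surveyed above rely on approximations (e.g., structured or factored estimates) or alternative hardware and algorithms to make the preconditioning tractable.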
Papers
Thermodynamic Natural Gradient Descent
Kaelan Donatella, Samuel Duffield, Maxwell Aifer, Denis Melanson, Gavin Crooks, Patrick J. Coles
Challenging Gradient Boosted Decision Trees with Tabular Transformers for Fraud Detection at Booking.com
Sergei Krutikov, Bulat Khaertdinov, Rodion Kiriukhin, Shubham Agrawal, Kees Jan De Vries