Gradient-Based
Gradient-based methods are central both to training machine learning models, where gradients drive parameter optimization, and to interpreting them, where gradients help explain model decisions. Current research focuses on improving the efficiency and robustness of gradient-based optimization, particularly in federated learning, and on developing gradient-informed sampling techniques that enhance model performance and explainability. These advances are crucial for scaling machine learning to larger datasets and more complex tasks, with applications ranging from medical image analysis to natural language processing and general optimization problems.
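As a point of reference for the core idea, the sketch below shows plain batch gradient descent on a least-squares objective. The data, step size, and iteration count are synthetic and illustrative only; they are not drawn from any of the papers listed here.

```python
import numpy as np

# Minimal illustration of gradient-based optimization:
# minimize 0.5 * mean((X @ w - y)**2) with batch gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # synthetic features
true_w = np.array([1.5, -2.0, 0.5])      # ground-truth parameters
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)                          # parameters to optimize
lr = 0.1                                 # step size (learning rate)

for step in range(200):
    residual = X @ w - y                 # prediction error
    grad = X.T @ residual / len(y)       # gradient of the mean squared error
    w -= lr * grad                       # gradient descent update

print(w)                                 # converges close to true_w
```

Variants of this update rule (stochastic minibatches, adaptive step sizes, Riemannian metrics, or aggregation across federated clients) underlie the methods studied in the papers below.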
Papers
TANGOS: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization
Alan Jeffares, Tennison Liu, Jonathan Crabbé, Fergus Imrie, Mihaela van der Schaar
Scalable Stochastic Gradient Riemannian Langevin Dynamics in Non-Diagonal Metrics
Hanlin Yu, Marcelo Hartmann, Bernardo Williams, Arto Klami
Gradient-Based Automated Iterative Recovery for Parameter-Efficient Tuning
Maximilian Mozes, Tolga Bolukbasi, Ann Yuan, Frederick Liu, Nithum Thain, Lucas Dixon
Optimizing CT Scan Geometries With and Without Gradients
Mareike Thies, Fabian Wagner, Noah Maul, Laura Pfaff, Linda-Sophie Schneider, Christopher Syben, Andreas Maier