Gradient-Based Methods
Gradient-based methods are central to both training and interpreting machine learning models: gradients drive parameter optimization and also shed light on how models reach their decisions. Current research focuses on making gradient-based optimization more efficient and robust, particularly in federated learning, and on developing gradient-informed sampling techniques that improve model performance and explainability. These advances are key to scaling machine learning to larger datasets and more complex tasks, with impact in areas ranging from medical image analysis to natural language processing and optimization problems.
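To make the summary concrete, here is a minimal sketch of the update rule at the heart of gradient-based optimization. The quadratic objective, learning rate, and iteration count are illustrative choices only, not drawn from any of the papers listed below.

import numpy as np

def f(w):
    # Toy convex objective: f(w) = ||w - 3||^2, minimized at w = [3, 3].
    return np.sum((w - 3.0) ** 2)

def grad_f(w):
    # Analytic gradient of f: 2 * (w - 3).
    return 2.0 * (w - 3.0)

w = np.zeros(2)             # initial parameters
lr = 0.1                    # learning rate (step size); illustrative value
for step in range(100):
    w = w - lr * grad_f(w)  # core update: step against the gradient

print(f(w), w)              # f(w) is near 0 and w is near [3, 3]

The papers below study extensions and analyses of this basic update, for example adding momentum (as in Adam) or folding model-complexity terms into the objective.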
Papers
Intractability of Learning the Discrete Logarithm with Gradient-Based Methods
Rustem Takhanov, Maxat Tezekbayev, Artur Pak, Arman Bolatov, Zhibek Kadyrsizova, Zhenisbek Assylbekov
Nature Inspired Evolutionary Swarm Optimizers for Biomedical Image and Signal Processing -- A Systematic Review
Subhrangshu Adhikary
Promoting Exploration in Memory-Augmented Adam using Critical Momenta
Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Jerry Huang, Simon Lacoste-Julien, Razvan Pascanu, Sarath Chandar
PLiNIO: A User-Friendly Library of Gradient-based Methods for Complexity-aware DNN Optimization
Daniele Jahier Pagliari, Matteo Risso, Beatrice Alessandra Motetti, Alessio Burrello