Gradient Based
Gradient-based methods are central to both training and interpreting machine learning models: gradients drive parameter optimization and also inform explanations of model decisions. Current research focuses on improving the efficiency and robustness of gradient-based optimization, particularly within federated learning, and on developing novel gradient-informed sampling techniques for better model performance and explainability. These advances are crucial for scaling machine learning to larger datasets and more complex tasks, with impact across fields ranging from medical image analysis to natural language processing and optimization problems.
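As a minimal sketch of what "gradient-based optimization" means in practice, the snippet below runs plain gradient descent on a toy quadratic loss. The loss function, learning rate, and step count are illustrative choices for this example, not taken from any of the papers listed here.

```python
# Minimal sketch of gradient-based optimization: plain gradient descent
# on the quadratic loss f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
# All constants here (loss, lr, steps) are illustrative assumptions.

def grad(w):
    """Gradient of f(w) = (w - 3)**2 with respect to w."""
    return 2.0 * (w - 3.0)

def gradient_descent(w0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to minimize the loss."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_star = gradient_descent(w0=0.0)
print(round(w_star, 4))  # converges toward the minimizer w = 3
```

The same update rule, `w -= lr * grad(w)`, underlies stochastic and federated variants; they differ mainly in how the gradient is estimated and aggregated.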
168 papers
Papers - Page 3
September 18, 2024
GRIN: GRadient-INformed MoE
Liyuan Liu, Young Jin Kim, Shuohang Wang, Chen Liang, Yelong Shen, Hao Cheng, Xiaodong Liu, Masahiro Tanaka, Xiaoxia Wu, Wenxiang Hu +7

September 3, 2024
A Unified Framework for Neural Computation and Learning Over Time
Stefano Melacci, Alessandro Betti, Michele Casoni, Tommaso Guidi, Matteo Tiezzi, Marco Gori