Gradient-Based Methods
Gradient-based methods are central to both training and interpreting machine learning models: gradients drive parameter optimization and can also reveal how a model arrives at its decisions. Current research focuses on improving the efficiency and robustness of gradient-based optimization, particularly in federated learning, and on developing gradient-informed sampling techniques that improve model performance and explainability. These advances are crucial for scaling machine learning to larger datasets and more complex tasks, with impact in fields ranging from medical image analysis to natural language processing and optimization problems.
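The core idea behind all of these methods is the same: repeatedly step parameters in the direction of the negative gradient of a loss. A minimal sketch in plain NumPy (the function names and the quadratic example loss are illustrative, not from any specific paper above):

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a differentiable function by following its negative gradient.

    grad  -- callable returning the gradient of the loss at x
    x0    -- starting point (array-like)
    lr    -- learning rate (step size)
    steps -- number of update iterations
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # Core update rule: x <- x - lr * grad(x)
        x = x - lr * grad(x)
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# The minimizer is x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=[0.0])
```

In practice, training a neural network replaces the analytic gradient above with one obtained by automatic differentiation, and the full-batch update with mini-batch (stochastic) estimates; the update rule itself is unchanged.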
Papers

19 papers, dated March 16, 2024 to June 18, 2024 (titles and links not preserved).