Gradient Information
Gradient information, the vector of partial derivatives that measures how a function's output changes with respect to each of its inputs, is central to many machine learning algorithms, serving as the foundation for both optimization and model interpretation. Current research focuses on improving gradient-based optimization, particularly in distributed settings such as federated learning, and on leveraging gradient information for tasks such as model compression, anomaly detection, and model explainability. These advances are crucial for improving the efficiency, robustness, and trustworthiness of machine learning models across diverse applications, from biomedical image analysis to large language model fine-tuning.
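To make the two main uses of gradient information concrete, the following is a minimal, illustrative sketch in plain NumPy (not taken from any of the papers listed below): it computes the gradient of a least-squares loss, uses that gradient for gradient-descent optimization, and then reuses the gradient with respect to a single example's inputs as a crude per-feature saliency signal of the kind many explainability methods build on. All names and numbers here are illustrative assumptions.

```python
import numpy as np

def loss(w, X, y):
    """Mean-squared-error loss for a linear model y_hat = X @ w."""
    residual = X @ w - y
    return 0.5 * np.mean(residual ** 2)

def grad_loss(w, X, y):
    """Gradient of the MSE loss with respect to the weights w."""
    residual = X @ w - y
    return X.T @ residual / len(y)

# Synthetic data for the sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

# Use 1: optimization. Gradient descent steps against the gradient of the loss.
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    w -= lr * grad_loss(w, X, y)
print("recovered weights:", w)

# Use 2: interpretation. The gradient of one example's loss with respect to the
# *inputs* gives a per-feature sensitivity (saliency) signal:
# d/dx [ 0.5 * (x·w - y)^2 ] = (x·w - y) * w
x0 = X[0]
input_grad = (x0 @ w - y[0]) * w
print("per-feature input gradient:", input_grad)
```

In a deep-learning setting the analytic gradients above would typically be replaced by automatic differentiation (e.g., a framework's autograd), but the roles of the gradient, driving parameter updates and attributing output changes to inputs, are the same.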
Papers
February 2, 2024
January 15, 2024
October 31, 2023
October 26, 2023
October 23, 2023
October 21, 2023
July 19, 2023
June 20, 2023
May 20, 2023
May 4, 2023
April 1, 2023
March 7, 2023
February 5, 2023
January 2, 2023
December 15, 2022
November 30, 2022
October 15, 2022
September 22, 2022
September 13, 2022