Gradient Information
Gradient information, the rate of change of a function's output with respect to its inputs, is central to many machine learning algorithms, serving as the foundation for optimization and model interpretation. Current research focuses on improving gradient-based optimization methods, particularly in distributed settings like federated learning, and leveraging gradient information for tasks such as model compression, anomaly detection, and enhanced model explainability. These advancements are crucial for improving the efficiency, robustness, and trustworthiness of machine learning models across diverse applications, from biomedical image analysis to large language model fine-tuning.
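The "rate of change of a function's output with respect to its inputs" can be made concrete with a small sketch. The function below is a hypothetical example (not drawn from any of the listed papers): its gradient is estimated numerically with central finite differences and then used for a single gradient-descent step, the basic move underlying the optimization methods discussed above.

```python
def f(x):
    # Hypothetical example function: f(x0, x1) = x0^2 + 3*x1.
    # Its analytic gradient is (2*x0, 3).
    return x[0] ** 2 + 3 * x[1]

def numerical_gradient(func, x, eps=1e-6):
    """Central finite-difference estimate of the gradient of func at x."""
    grad = []
    for i in range(len(x)):
        x_plus, x_minus = list(x), list(x)
        x_plus[i] += eps
        x_minus[i] -= eps
        grad.append((func(x_plus) - func(x_minus)) / (2 * eps))
    return grad

# Gradient information at the point (2, 1): approximately (4, 3).
x = [2.0, 1.0]
g = numerical_gradient(f, x)

# One gradient-descent step with learning rate 0.1 moves x
# against the gradient, reducing f.
lr = 0.1
x_new = [xi - lr * gi for xi, gi in zip(x, g)]
```

In practice, machine learning frameworks compute these gradients exactly via automatic differentiation rather than finite differences, but the quantity being computed is the same.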