Gradient Information
Gradient information, the rate of change of a function's output with respect to its inputs, is central to many machine learning algorithms, serving as the foundation for both optimization and model interpretation. Current research focuses on improving gradient-based optimization, particularly in distributed settings such as federated learning, and on leveraging gradients for tasks such as model compression, anomaly detection, and model explainability. These advances are crucial to the efficiency, robustness, and trustworthiness of machine learning models across diverse applications, from biomedical image analysis to large language model fine-tuning.
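As a minimal sketch of the two uses of gradient information mentioned above, the example below (assuming JAX and a hypothetical linear model) computes the gradient of a loss with respect to parameters for an optimization step, and the gradient of the prediction with respect to the input as a simple sensitivity (saliency-style) measure.

```python
import jax
import jax.numpy as jnp

def predict(params, x):
    # Hypothetical linear model: params = (weights, bias).
    w, b = params
    return jnp.dot(x, w) + b

def loss(params, x, y):
    # Squared-error loss for a single example.
    return (predict(params, x) - y) ** 2

params = (jnp.array([0.5, -0.3]), 0.1)
x, y = jnp.array([1.0, 2.0]), 1.5

# 1) Optimization: gradient of the loss w.r.t. the parameters,
#    followed by a plain gradient-descent update.
param_grads = jax.grad(loss, argnums=0)(params, x, y)
lr = 0.01
params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, param_grads)

# 2) Interpretation: gradient of the prediction w.r.t. the input
#    indicates how sensitive the output is to each input feature.
input_grad = jax.grad(predict, argnums=1)(params, x)
print(param_grads, input_grad)
```

The same pattern scales to real models: only the `predict` and `loss` functions change, while the gradient calls stay the same.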