Gradient Information
Gradient information, the vector of partial derivatives of a function's output with respect to its inputs, is central to many machine learning algorithms, serving as the foundation for both optimization and model interpretation. Current research focuses on improving gradient-based optimization methods, particularly in distributed settings such as federated learning, and on leveraging gradient information for tasks such as model compression, anomaly detection, and enhanced model explainability. These advances are crucial for improving the efficiency, robustness, and trustworthiness of machine learning models across diverse applications, from biomedical image analysis to large language model fine-tuning.
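As a minimal sketch of what "gradient information" means in practice, the snippet below computes the gradient of a loss with respect to model parameters via automatic differentiation and applies one gradient-descent update. The linear model, toy data, and learning rate are hypothetical placeholders, not taken from any of the surveyed papers.

```python
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # Simple linear model with squared-error loss (illustrative only).
    w, b = params
    pred = x @ w + b
    return jnp.mean((pred - y) ** 2)

# Toy data and parameters (hypothetical values for illustration).
x = jnp.ones((8, 3))
y = jnp.zeros((8,))
params = (jnp.zeros((3,)), jnp.zeros(()))

# Gradient information: how the loss changes with respect to each parameter.
grads = jax.grad(loss)(params, x, y)

# One gradient-descent step, the core of gradient-based optimization.
lr = 0.1
params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
```

The same gradients that drive this update are what gradient-based compression, anomaly detection, and explainability methods inspect or reuse.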
48 papers
Papers