Model Gradient
Model gradients, the derivatives of a machine learning model's loss function with respect to its parameters, are central to training and are increasingly studied for their implications for privacy and model interpretability. Current research leverages gradients for tasks such as dataset condensation to accelerate hyperparameter search, improving federated learning efficiency through informed pruning and gradient aggregation, and understanding how gradients reveal information about training data, which has led to novel attacks and defenses. These investigations are crucial for enhancing the security and explainability of machine learning models, particularly in privacy-sensitive settings such as federated learning.
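To make the definition concrete, the sketch below computes the gradient of a loss L with respect to the model parameters theta for a toy linear model. It is a minimal illustration assuming PyTorch; the model, data, and loss here are hypothetical placeholders, not drawn from any of the papers listed below. In federated learning, these per-parameter gradients are what clients share with the server, and they are also what gradient-leakage attacks attempt to invert.

```python
# Minimal sketch of computing model gradients (assumes PyTorch; the
# toy model, synthetic data, and MSE loss are illustrative only).
import torch
import torch.nn as nn

# Toy linear model and a small batch of synthetic data.
model = nn.Linear(in_features=4, out_features=1)
x = torch.randn(8, 4)   # 8 samples, 4 features
y = torch.randn(8, 1)   # regression targets

# Forward pass: evaluate the loss L(theta) on this batch.
loss = nn.functional.mse_loss(model(x), y)

# Backward pass: populate dL/dtheta for every parameter.
loss.backward()

for name, param in model.named_parameters():
    # param.grad holds the gradient of the loss w.r.t. this parameter;
    # in gradient-leakage settings, these are the values an adversary observes.
    print(name, param.grad.shape)
```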
Papers
Gradients Stand-in for Defending Deep Leakage in Federated Learning
H. Yi, H. Ren, C. Hu, Y. Li, J. Deng, X. Xie
GPR Full-Waveform Inversion through Adaptive Filtering of Model Parameters and Gradients Using CNN
Peng Jiang, Kun Wang, Jiaxing Wang, Zeliang Feng, Shengjie Qiao, Runhuai Deng, Fengkai Zhang