Gradient Perturbation
Gradient perturbation involves adding noise to gradients during model training, primarily to enhance privacy in federated learning or to improve generalization by steering optimization away from sharp minima; a minimal sketch of both mechanisms is given below. Current research focuses on designing better perturbation strategies, including algorithms like Sharpness-Aware Minimization (SAM) and its variants, and on how perturbation interacts with model architectures such as graph transformers and recurrent neural networks. These techniques are crucial for balancing privacy protection with model accuracy and interpretability, particularly in sensitive domains like healthcare and finance where data privacy is paramount.
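To make the two mechanisms concrete, here is a minimal PyTorch-style sketch, not drawn from any specific paper listed on this page. `perturb_gradients` adds Gaussian noise after clipping the gradient norm (a simplification of DP-SGD, which clips per-example gradients before averaging), and `sam_step` implements the basic two-pass SAM update. The function names and the `clip_norm`, `noise_multiplier`, and `rho` parameters are illustrative assumptions.

```python
import torch

def perturb_gradients(model, clip_norm=1.0, noise_multiplier=1.0):
    """Privacy-style perturbation: clip gradients, then add Gaussian noise.

    Simplified sketch: true DP-SGD clips *per-example* gradients; here the
    already-aggregated batch gradient is clipped instead.
    """
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    for p in model.parameters():
        if p.grad is not None:
            # Noise scale is tied to the clipping bound, as in DP-SGD.
            p.grad.add_(torch.randn_like(p.grad) * noise_multiplier * clip_norm)

def sam_step(model, loss_fn, inputs, targets, optimizer, rho=0.05):
    """Generalization-style perturbation: one basic SAM update."""
    optimizer.zero_grad()
    # First pass: gradient at the current weights.
    loss_fn(model(inputs), targets).backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    # Ascent step: perturb weights toward higher loss within an L2 ball of radius rho.
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()
    # Second pass: gradient at the perturbed, sharpness-probing point.
    loss_fn(model(inputs), targets).backward()
    # Undo the perturbation, then update with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```

In a training loop, `perturb_gradients` would be called between `loss.backward()` and `optimizer.step()`, while `sam_step` replaces the entire update step.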