Gradient Perturbation

Gradient perturbation adds noise to gradients during model training, primarily to enhance privacy in federated learning or to improve generalization by steering optimization toward flatter minima. Current research focuses on optimizing perturbation strategies, including gradient-guided methods such as Sharpness-Aware Minimization (SAM) and its variants, and on how perturbation interacts with model architectures such as graph transformers and recurrent neural networks. These techniques are crucial for balancing privacy protection with model accuracy and interpretability, particularly in sensitive domains like healthcare and finance where data privacy is paramount.
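
As a concrete illustration, the sketch below shows two common flavors of gradient perturbation in PyTorch: a privacy-style step that clips the batch gradient and adds Gaussian noise before the update (a simplification of DP-SGD, which clips per-example gradients), and a SAM-style step that perturbs the weights along the gradient direction before recomputing the update. The toy model, clip_norm, noise_std, and rho values are illustrative assumptions, not settings from any particular paper.

```python
# Minimal sketches of two gradient-perturbation strategies, in PyTorch.
# All hyperparameters and the toy model are illustrative assumptions.
import torch
import torch.nn as nn


def noisy_gradient_step(model, loss_fn, x, y, opt, clip_norm=1.0, noise_std=0.5):
    """Privacy-style perturbation: clip the gradient, add Gaussian noise, step.

    A faithful DP-SGD would clip per-example gradients; clipping the
    batch-averaged gradient here is a simplification for brevity.
    """
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad += noise_std * torch.randn_like(p.grad)
    opt.step()
    return loss.item()


def sam_step(model, loss_fn, x, y, opt, rho=0.05):
    """SAM-style perturbation: ascend along the gradient, then update using
    the gradient computed at the perturbed (sharpness-probing) weights."""
    # Gradient at the current weights.
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
    # Perturb weights by rho in the normalized gradient direction.
    eps = []
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            e = rho * g / grad_norm
            p.add_(e)
            eps.append(e)
    # Gradient at the perturbed weights.
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Restore the original weights, then update with the SAM gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
    opt.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(10, 1)                        # toy model
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()
    x, y = torch.randn(32, 10), torch.randn(32, 1)  # synthetic batch

    print("noisy step loss:", noisy_gradient_step(model, loss_fn, x, y, opt))
    print("SAM step loss:  ", sam_step(model, loss_fn, x, y, opt))
```

Both steps perturb the optimization signal, but with different goals: the noisy step trades accuracy for privacy, while the SAM-style step spends an extra forward/backward pass to bias training toward flatter minima.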

Papers