Gradient Leakage Attacks

Gradient leakage attacks exploit information inadvertently revealed in the model gradients shared during federated learning, enabling an adversary to reconstruct sensitive training data. A typical attack initializes dummy inputs and labels, then optimizes them so that the gradients they induce match the gradients a client shared; when the match is close, the dummy inputs approximate the client's private data. Current research focuses on improving the efficiency of these attacks, developing defenses such as gradient perturbation and data obfuscation, and analyzing the vulnerabilities of specific architectures such as transformers and convolutional neural networks. Understanding and mitigating gradient leakage is crucial for the privacy and security of federated learning systems, and hence for the broader adoption of this collaborative machine learning paradigm.
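A minimal PyTorch sketch of this gradient-matching idea, in the spirit of Deep Leakage from Gradients (DLG); the model, data shapes, and optimizer settings below are illustrative assumptions, not taken from any particular paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
# Illustrative victim model; any differentiable model works in principle.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# The client computes gradients on its private example and shares only them.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# The attacker optimizes dummy data (and soft labels) to match those gradients.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    # Cross-entropy with soft dummy labels, kept differentiable w.r.t. both dummies.
    dummy_loss = torch.sum(
        -F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1)
    )
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # Objective: squared distance between dummy gradients and shared gradients.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(100):
    optimizer.step(closure)

print("reconstruction error:", F.mse_loss(x_dummy.detach(), x_true).item())
```

On the defense side, gradient perturbation commonly means clipping and noising gradients before they leave the client, as in differentially private SGD. A hedged sketch under that assumption (clip_norm and noise_std are illustrative hyperparameters, and stronger noise trades reconstruction risk against model utility):

```python
import torch

def perturb_gradients(grads, clip_norm=1.0, noise_std=0.01):
    """Clip the overall gradient norm, then add Gaussian noise before sharing."""
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    scale = min(1.0, clip_norm / (float(total_norm) + 1e-12))
    return [g * scale + noise_std * torch.randn_like(g) for g in grads]
```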

Papers