Gradient Leakage Attack
Gradient leakage attacks exploit the information inadvertently revealed by the model gradients that clients share during federated learning, allowing an adversary to reconstruct sensitive training data. Most such attacks follow a gradient-matching recipe: the adversary optimizes dummy inputs and labels until the gradients they induce match the gradients shared by a victim client. Current research focuses on making these attacks more efficient, on developing defenses such as gradient perturbation and data obfuscation, and on analyzing the vulnerabilities of specific architectures such as transformers and convolutional neural networks. Understanding and mitigating gradient leakage is therefore crucial to the privacy and security of federated learning, and to the broader adoption of this collaborative machine learning paradigm.
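The sketch below illustrates the gradient-matching idea in the style of Deep Leakage from Gradients (DLG). It is a minimal, illustrative reconstruction, assuming a toy linear classifier, a single private example, and an L-BFGS attacker; the model, tensor shapes, and iteration count are placeholders, not a faithful reproduction of any specific paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model standing in for the shared FL model (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()

# --- Victim side: compute the gradient that would be shared in FL ---
x_true = torch.rand(1, 1, 28, 28)          # private input
y_true = torch.tensor([3])                 # private label
loss = criterion(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# --- Attacker side: optimize dummy data so its gradient matches ---
x_dummy = torch.randn(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)   # soft label, recovered jointly
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    pred = model(x_dummy)
    # Soft-label cross entropy so the label can be optimized too.
    dummy_loss = torch.sum(
        torch.softmax(y_dummy, dim=-1) * -torch.log_softmax(pred, dim=-1)
    )
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True
    )
    # Gradient-matching objective: L2 distance to the shared gradients.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    opt.step(closure)

print("mean reconstruction error:", (x_dummy - x_true).abs().mean().item())
```

In this picture, a gradient-perturbation defense amounts to adding calibrated noise to true_grads before they leave the client, degrading the attacker's matching objective at some cost in model utility.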