Gradient Inversion Attack
Gradient inversion attacks exploit the gradients shared during federated learning (FL) to reconstruct sensitive training data held by individual clients, undermining the privacy benefits of FL. Current research focuses on improving attack efficacy through techniques like neural architecture search and incorporating strong image priors, while also exploring defenses such as gradient pruning, variational modeling, and secure aggregation methods. The ongoing investigation into these attacks and defenses is crucial for establishing the practical security and privacy guarantees of FL systems, particularly in sensitive applications like healthcare and finance.
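To make the threat concrete, here is a minimal sketch of the classic analytic form of gradient inversion for a single fully connected layer: because the weight gradient of such a layer is the outer product of the upstream gradient and the input, dividing any row of the weight gradient by the corresponding bias-gradient entry recovers the private input exactly. The layer sizes, random data, and squared-error loss below are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 6, 4
W = rng.normal(size=(d_out, d_in))   # shared model weights
b = rng.normal(size=d_out)           # shared bias
x = rng.normal(size=d_in)            # client's private input

# Forward pass through one fully connected layer, with a
# squared-error loss against an arbitrary target (an assumption
# for illustration; any differentiable loss works the same way).
z = W @ x + b
target = rng.normal(size=d_out)
dL_dz = 2.0 * (z - target)           # upstream gradient dL/dz

# Gradients the client would share in federated learning.
grad_W = np.outer(dL_dz, x)          # dL/dW = (dL/dz) x^T
grad_b = dL_dz                       # dL/db = dL/dz

# Attack: each row of dL/dW is the private input x scaled by the
# matching entry of dL/db, so one division recovers x exactly.
# Pick a row whose bias-gradient entry is safely nonzero.
i = int(np.argmax(np.abs(grad_b)))
x_reconstructed = grad_W[i] / grad_b[i]

print(np.allclose(x_reconstructed, x))   # → True
```

Optimization-based attacks such as deep leakage from gradients generalize this idea to deeper networks: the attacker optimizes a dummy input so that its gradient matches the observed one, often regularized by the image priors mentioned above.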