Gradient Inversion Attack
Gradient inversion attacks exploit the gradients shared during federated learning (FL) to reconstruct the sensitive training data held by individual clients, undermining the privacy benefits FL is meant to provide. Current research focuses on improving attack efficacy through techniques such as neural architecture search and strong image priors, while also exploring defenses such as gradient pruning, variational modeling, and secure aggregation. The ongoing investigation into these attacks and defenses is crucial for establishing the practical security and privacy guarantees of FL systems, particularly in sensitive applications like healthcare and finance.
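To make the threat concrete, the sketch below shows the simplest form of gradient inversion: for a fully connected layer with bias, z = Wx + b, the shared gradients satisfy dL/dW[i][j] = delta[i]·x[j] and dL/db[i] = delta[i], so the private input x can be read off in closed form by dividing one row of the weight gradient by the corresponding bias gradient. All numbers here are hypothetical and chosen only for illustration; this is a minimal pure-Python demonstration, not any particular paper's attack.

```python
# Minimal sketch: recovering a client's private input from the gradients
# it would share for one fully connected layer with bias (pure Python).
# For z = W x + b, dL/dW[i][j] = delta[i] * x[j] and dL/db[i] = delta[i]
# (delta = dL/dz), so x[j] = dL/dW[i][j] / dL/db[i] for any i with
# nonzero bias gradient.

def forward(W, b, x):
    """Affine layer z = W x + b."""
    return [sum(W[i][j] * x[j] for j in range(len(x))) + b[i]
            for i in range(len(b))]

def layer_gradients(W, b, x, target):
    """Client side: gradients of L = 0.5 * ||z - target||^2 w.r.t. W and b."""
    z = forward(W, b, x)
    delta = [z[i] - target[i] for i in range(len(z))]   # dL/dz
    grad_W = [[delta[i] * x[j] for j in range(len(x))]
              for i in range(len(b))]
    grad_b = delta[:]                                    # dL/db
    return grad_W, grad_b

def invert_input(grad_W, grad_b):
    """Attacker side: reconstruct x from the shared gradients alone."""
    # Pick the output unit with the largest bias gradient and divide its
    # weight-gradient row by that scalar.
    i = max(range(len(grad_b)), key=lambda k: abs(grad_b[k]))
    return [grad_W[i][j] / grad_b[i] for j in range(len(grad_W[i]))]

# Hypothetical model parameters and private data, for illustration only.
W = [[0.5, -1.0, 0.3], [0.2, 0.8, -0.4]]
b = [0.1, -0.2]
x_private = [1.2, 0.7, -0.5]   # the client's sensitive input
target = [0.0, 1.0]

gW, gb = layer_gradients(W, b, x_private, target)
x_recovered = invert_input(gW, gb)
print(x_recovered)  # equals x_private up to floating-point error
```

This closed-form trick only works layer-wise; attacks on deep networks instead optimize "dummy" inputs so their gradients match the shared ones, which is where the image priors and search techniques mentioned above come in.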