Gradient Inversion Attack

Gradient inversion attacks exploit the gradients shared during federated learning (FL) to reconstruct sensitive training data held by individual clients, undermining the privacy benefits of FL. Current research focuses on improving attack efficacy through techniques like neural architecture search and incorporating strong image priors, while also exploring defenses such as gradient pruning, variational modeling, and secure aggregation methods. The ongoing investigation into these attacks and defenses is crucial for establishing the practical security and privacy guarantees of FL systems, particularly in sensitive applications like healthcare and finance.
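The simplest such attacks need no iterative optimization at all: for a fully connected layer with a bias term, the shared gradients determine the input analytically. The sketch below (illustrative only, in numpy; all variable names are invented for this example) shows why: for `z = W x + b`, the weight gradient is the outer product of the bias gradient and the input, so dividing any row of `dL/dW` by the corresponding entry of `dL/db` recovers `x` exactly.

```python
import numpy as np

# Sketch: exact input reconstruction from the gradients of a dense layer
# with bias. For z = W x + b and any loss L, dL/dW = (dL/dz) x^T and
# dL/db = dL/dz, so each row of dL/dW divided by the matching entry of
# dL/db yields the private input x.

rng = np.random.default_rng(0)

# "Client" side: one private example through a linear layer + MSE loss.
x = rng.normal(size=4)            # private training example
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
target = rng.normal(size=3)

z = W @ x + b
dL_dz = 2 * (z - target)          # gradient of MSE wrt pre-activations
grad_W = np.outer(dL_dz, x)       # gradients the client would share
grad_b = dL_dz

# "Attacker" side: sees only grad_W and grad_b, recovers x exactly.
i = int(np.argmax(np.abs(grad_b)))   # any row with nonzero bias gradient
x_reconstructed = grad_W[i] / grad_b[i]

print(np.allclose(x, x_reconstructed))  # → True
```

Deeper networks and aggregated batches break this closed form, which is why the literature turns to optimization-based reconstruction (matching dummy gradients to the shared ones) augmented with the image priors mentioned above.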

Papers