Data Reconstruction Attack
Data reconstruction attacks exploit information leaked during training, particularly in federated learning (FL) and vertical federated learning (VFL), to reconstruct the private training data of participating clients. Current research focuses on developing and analyzing these attacks across a range of neural network architectures, often leveraging techniques such as gradient inversion, linear layer leakage, and diffusion models. Understanding and mitigating these attacks is crucial for protecting the sensitive data used in collaborative machine learning, and it directly affects the security and trustworthiness of applications in finance, e-commerce, and beyond.
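As a minimal illustration of the linear layer leakage mentioned above (a hypothetical sketch, not from any specific paper): for a fully connected layer z = Wx + b, the gradients shared in FL satisfy dL/dW = (dL/dz)·xᵀ and dL/db = dL/dz, so a single private input x can be recovered exactly by dividing any row of dL/dW by the corresponding entry of dL/db. All names and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)        # private input the client never shares
W = rng.normal(size=(3, 4))   # first-layer weights
b = rng.normal(size=3)        # first-layer bias

z = W @ x + b                 # layer output
target = rng.normal(size=3)
delta = 2 * (z - target)      # dL/dz for a squared-error loss

# Gradients a server observes in gradient-sharing FL:
grad_W = np.outer(delta, x)   # dL/dW = delta x^T
grad_b = delta                # dL/db = delta

# Attack: pick any row whose bias gradient is nonzero and divide.
i = int(np.argmax(np.abs(grad_b)))
x_rec = grad_W[i] / grad_b[i]  # exact copy of the private input x

print(np.allclose(x_rec, x))   # True
```

This exact recovery only holds for a batch size of one; with larger batches the gradient is a sum of such outer products, which is why practical attacks fall back on optimization-based gradient inversion.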