Data Reconstruction Attack
Data reconstruction attacks exploit information leaked during collaborative training, particularly the model updates shared in federated learning (FL) and vertical federated learning (VFL), to reconstruct the private training data of participating clients. Current research focuses on developing and analyzing these attacks, often leveraging techniques such as gradient inversion, linear layer leakage, and diffusion models, across a range of neural network architectures. Understanding and mitigating these attacks is crucial for preserving the privacy of sensitive data in collaborative machine learning, and it directly affects the security and trustworthiness of applications in finance, e-commerce, and beyond.
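The linear layer leakage mentioned above can be illustrated with a minimal sketch: for a fully-connected layer with a bias term, each row of the weight gradient equals the corresponding bias gradient times the input, so a server observing a client's gradients can recover the input exactly. The toy model below (plain NumPy, with made-up layer sizes and an MSE loss chosen for illustration, not from any specific paper) demonstrates the idea; it is a sketch of the leakage mechanism, not a full attack implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Private input held by a client (e.g. a flattened image).
x = rng.normal(size=4)

# First fully-connected layer: z = W x + b, followed by an MSE loss
# against some target y (both chosen arbitrarily for the sketch).
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
y = rng.normal(size=3)

z = W @ x + b
dz = z - y  # dL/dz for L = 0.5 * ||z - y||^2

# Gradients the server would observe in federated learning:
grad_W = np.outer(dz, x)  # dL/dW = dz x^T
grad_b = dz               # dL/db = dz

# Linear layer leakage: row i of dL/dW is (dL/db_i) * x, so dividing
# any row with a nonzero bias gradient by that bias gradient recovers x.
i = int(np.argmax(np.abs(grad_b)))
x_reconstructed = grad_W[i] / grad_b[i]

print(np.allclose(x_reconstructed, x))  # exact reconstruction
```

Gradient inversion attacks generalize this idea to deeper networks, where no closed-form ratio exists, by optimizing a dummy input so that its gradients match the observed ones.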
Papers
23 papers, published between December 2022 and February 2025.