Deep Leakage
Deep leakage refers to the unintended exposure of sensitive information during machine learning model training and deployment; it compromises data privacy and can lead to inaccurate or biased results. Current research focuses on identifying and mitigating leakage in several contexts: federated learning, where transmitted gradients and model weights can be inverted to reconstruct private training data; code generation, where evaluation datasets are contaminated by material seen during training; and general machine learning pipelines, where confounders and biased data distort results. Understanding and addressing deep leakage is crucial for ensuring the reliability and trustworthiness of machine learning systems across diverse applications, from healthcare to cybersecurity.
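To make the federated-learning case concrete, the sketch below shows a gradient-inversion attack in the style of Deep Leakage from Gradients (Zhu et al., 2019): an attacker who observes the gradient a client transmits optimizes dummy data and labels until they reproduce that gradient, thereby recovering the private example. The toy model, input shapes, and hyperparameters here are illustrative assumptions, not a reference implementation.

```python
# Minimal gradient-inversion ("deep leakage") sketch, assuming a PyTorch
# setup; model and shapes are toy stand-ins chosen for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for the model shared in federated training.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()

# Victim side: one private example produces the gradient that the client
# would transmit to the server during a federated training round.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
loss = criterion(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# Attacker side: optimize dummy data and a soft dummy label so that the
# gradient they induce matches the observed one.
x_dummy = torch.rand_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # logits of a soft label
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    pred = model(x_dummy)
    # Cross-entropy with the soft dummy label, differentiable in y_dummy.
    dummy_loss = torch.sum(
        torch.softmax(y_dummy, dim=-1) * -torch.log_softmax(pred, dim=-1)
    )
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True
    )
    # Objective: squared distance between dummy and observed gradients.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    diff = opt.step(closure)

print(f"final gradient-matching loss: {diff.item():.6f}")
print(f"mean reconstruction error:    {(x_dummy - x_true).abs().mean().item():.4f}")
```

Defenses studied in this line of work (gradient clipping, noise addition, secure aggregation) act precisely on the quantity this attack exploits: the fidelity of the transmitted gradient.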