Data Leakage
Data leakage in machine learning covers two related problems: the unintended exposure of sensitive training data through model outputs or intermediate computations (a privacy failure), and the contamination of evaluation data by training data (a benchmarking failure). Current research focuses on detecting and mitigating leakage in settings such as federated learning, large language models, and recommendation systems, employing techniques like differential privacy and adversarial training to preserve privacy while maintaining model accuracy. Addressing data leakage is crucial for the responsible development and deployment of machine learning systems, particularly in sensitive domains like healthcare and finance, and for establishing reliable benchmarks for model evaluation.
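One of the mitigation techniques mentioned above, differential privacy, is often applied to training in DP-SGD style: each example's gradient is clipped to a fixed norm and Gaussian noise is added to the average, bounding how much any single training example can influence (and therefore leak through) the model. The sketch below illustrates that one step with NumPy only; the function name and parameter values are illustrative, not from any specific paper or library.

```python
import numpy as np

def dp_gradient_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD-style update: clip each per-sample gradient to `clip_norm`,
    average the clipped gradients, then add calibrated Gaussian noise.

    Illustrative sketch only; real systems also track the cumulative
    privacy budget (epsilon, delta) across steps, which is omitted here.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise scale is proportional to the clipping bound and inversely
    # proportional to the batch size (the sensitivity of the average).
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_sample_grads),
        size=mean_grad.shape,
    )
    return mean_grad + noise
```

With `noise_multiplier=0.0` the function reduces to plain clipped-gradient averaging, which makes the clipping behavior easy to verify in isolation before noise is turned on.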