Data Leakage
Data leakage in machine learning refers to the unintended exposure of sensitive information from training data through model outputs or intermediate computations; the term also covers train-test contamination, where evaluation data overlaps with training data and inflates measured performance. Current research focuses on detecting and mitigating leakage in contexts such as federated learning, large language models, and recommendation systems, employing techniques like differential privacy and adversarial training to protect privacy while preserving model accuracy. Understanding and addressing data leakage is crucial for the responsible development and deployment of machine learning systems, particularly in sensitive domains such as healthcare and finance, and for establishing reliable benchmarks for model evaluation.
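Since differential privacy is named above as a mitigation, the sketch below illustrates the core DP-SGD recipe: clip each per-example gradient to bound any single example's influence, then add calibrated Gaussian noise before the update. It is a minimal sketch on a toy logistic-regression problem; the parameter names (`clip_norm`, `noise_multiplier`) and all data are illustrative assumptions, not drawn from any of the surveyed papers.

```python
# Minimal DP-SGD-style update: per-example gradient clipping + Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

def per_example_grads(w, X, y):
    # Logistic-regression gradient computed separately for each example,
    # so each example's contribution can be clipped individually.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (p - y)[:, None] * X  # shape: (n_examples, n_features)

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    g = per_example_grads(w, X, y)
    # 1. Clip each per-example gradient so no single record dominates.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip_norm)
    # 2. Sum, then add Gaussian noise scaled to the clipping bound.
    noisy_sum = g.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape
    )
    # 3. Average over the batch and take a gradient step.
    return w - lr * noisy_sum / len(X)

# Toy usage: a small synthetic binary-classification problem.
X = rng.normal(size=(128, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w = np.zeros(5)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
print("learned weights:", np.round(w, 2))
```

The clipping step is what makes the noise scale meaningful: it caps each example's sensitivity, so the added noise can mask any individual record's contribution at the cost of some accuracy.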