Data Leakage
In machine learning, data leakage refers both to the unintended exposure of sensitive training data through model outputs, gradients, or intermediate computations, and to contamination between training and evaluation data that inflates benchmark scores. Current research focuses on detecting and mitigating leakage in federated learning, large language models, and recommender systems, using techniques such as differential privacy and adversarial training to strengthen privacy guarantees while preserving model accuracy. Addressing data leakage is essential for the responsible development and deployment of machine learning systems, particularly in sensitive domains such as healthcare and finance, and for establishing trustworthy evaluation benchmarks.
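As a concrete illustration of the differential-privacy mitigation mentioned above, training gradients can be sanitized by clipping each example's gradient to an L2 bound and adding calibrated Gaussian noise, in the style of DP-SGD (Abadi et al., 2016). The sketch below uses only NumPy; the function name, clipping bound, and noise multiplier are illustrative assumptions rather than settings from any specific paper.

```python
import numpy as np

def sanitize_gradients(per_example_grads, clip_norm=1.0,
                       noise_multiplier=1.1, rng=None):
    """Clip each example's gradient to an L2 bound, average, and add
    Gaussian noise calibrated to that bound (DP-SGD-style sketch)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    mean_grad = np.mean(clipped, axis=0)
    # Each example's contribution to the mean is bounded by
    # clip_norm / batch_size, so the noise scale is the noise
    # multiplier times that sensitivity bound.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

# Example: sanitize a batch of fake per-example gradients.
fake_grads = [np.random.randn(10) for _ in range(32)]
update = sanitize_gradients(fake_grads)
```

In practice, the privacy guarantee (an epsilon-delta budget) implied by a chosen noise multiplier, batch size, and number of training steps is computed separately with a privacy accountant; the noise addition alone does not report it.

On the benchmark side, a common first-pass check for train-test contamination is n-gram overlap between each evaluation example and the training corpus. The following is a minimal sketch of that idea, assuming whitespace tokenization; the n-gram length and overlap threshold are illustrative assumptions.

```python
def ngrams(text, n=8):
    """Return the set of n-gram strings in a whitespace-tokenized text."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def looks_contaminated(train_docs, test_doc, n=8, threshold=0.5):
    """Flag a test document whose n-grams overlap heavily with training data."""
    test_grams = ngrams(test_doc, n)
    if not test_grams:
        return False
    train_grams = set().union(*(ngrams(d, n) for d in train_docs))
    return len(test_grams & train_grams) / len(test_grams) >= threshold
```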