Label Leakage
Label leakage is the unintended exposure of sensitive training labels during model training, particularly in privacy-preserving settings such as federated learning. Current research focuses on identifying and mitigating label leakage vulnerabilities across model architectures, including tree-based models and systems that rely on secure aggregation, using defenses such as differential privacy and regularization. Addressing label leakage is essential for protecting the sensitive data used to train models and for building trustworthy, ethically sound AI systems across diverse applications.
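As a minimal sketch of one mitigation mentioned above, the Python snippet below implements k-ary randomized response, a standard mechanism for label differential privacy: each training label is kept with probability e^ε / (e^ε + k − 1) and is otherwise replaced by a uniformly random other class. The function name and parameters are illustrative and not drawn from any specific paper surveyed here.

```python
import numpy as np

def randomized_response(labels, num_classes, epsilon, rng=None):
    """Apply k-ary randomized response to integer labels for epsilon label-DP.

    Each label is retained with probability e^eps / (e^eps + k - 1);
    otherwise it is replaced by one of the other k - 1 classes,
    chosen uniformly at random.
    """
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    keep = rng.random(labels.shape) < p_keep
    # For flipped entries, sample a uniformly random *different* class
    # by adding an offset in [1, k - 1] modulo k.
    offsets = rng.integers(1, num_classes, size=labels.shape)
    flipped = (labels + offsets) % num_classes
    return np.where(keep, labels, flipped)

# Example: privatize labels for a 3-class problem at epsilon = 1.0.
noisy_labels = randomized_response([0, 1, 2, 1], num_classes=3, epsilon=1.0)
```

Because the flipping probabilities are known, downstream training can debias the loss or the label distribution; the privacy guarantee holds regardless, since it depends only on the randomization applied to each label.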