User-Level Differential Privacy
User-level differential privacy (DP) protects all of the data an individual user contributes to a machine learning system, a stronger guarantee than item-level DP, which protects only individual data points. Current research develops and analyzes algorithms for achieving user-level DP in settings such as federated learning and large language model fine-tuning, typically by clipping each user's aggregated contribution and adding calibrated, often adaptively scaled, noise. These guarantees matter most in collaborative training and analysis of sensitive personal data, and the area continues to drive new privacy-preserving algorithms and theoretical frameworks. A minimal sketch of the per-user clipping and noising step appears below.
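The sketch below illustrates the per-user clipping and noise-injection pattern mentioned above, in the style of user-level DP aggregation for federated learning. It is a minimal illustration, not a reference implementation: the function names (`clip_update`, `aggregate_with_user_level_dp`) and the parameters `clip_norm` and `noise_multiplier` are assumptions chosen for clarity, and a real deployment would pair this step with a privacy accountant to compute the resulting (epsilon, delta).

```python
import numpy as np


def clip_update(update, clip_norm):
    """Scale a user's update so its L2 norm is at most clip_norm (hypothetical helper)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))


def aggregate_with_user_level_dp(user_updates, clip_norm, noise_multiplier, rng=None):
    """Average per-user updates with per-user clipping and Gaussian noise.

    Because each user contributes a single clipped vector, adding or removing
    one user changes the sum by at most clip_norm in L2 norm, so Gaussian noise
    with standard deviation noise_multiplier * clip_norm gives a user-level DP
    guarantee whose (epsilon, delta) depends on noise_multiplier and the number
    of aggregation rounds (tracked by a separate accountant, not shown here).
    """
    rng = rng or np.random.default_rng()
    clipped = [clip_update(u, clip_norm) for u in user_updates]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=clipped[0].shape
    )
    return noisy_sum / len(user_updates)


# Example: three users, each contributing one model-update vector.
updates = [np.random.default_rng(i).standard_normal(10) for i in range(3)]
private_avg = aggregate_with_user_level_dp(updates, clip_norm=1.0, noise_multiplier=1.1)
```

The key design choice is that clipping is applied to each user's entire contribution rather than to individual examples, which is what upgrades the guarantee from item-level to user-level DP.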