Differential Privacy Noise
Differential privacy noise is random perturbation added to data, gradients, or model parameters during machine learning to protect individual privacy while preserving overall data utility. Because this noise degrades model accuracy, current research focuses on mitigating its impact through techniques such as low-pass filtering of gradients, post-training noise adjustment, and adaptive noise-injection strategies tailored to specific algorithms (e.g., stochastic gradient descent, Bayesian methods). These advances aim to improve the privacy-utility trade-off in applications such as federated learning and distributed systems, enabling more accurate models under strong privacy guarantees.
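As a concrete illustration of how such noise is typically injected, the sketch below shows a single Gaussian-mechanism step in the style of DP-SGD: each per-example gradient is clipped to bound any individual's influence, the clipped gradients are averaged, and zero-mean Gaussian noise calibrated to that bound is added. This is a minimal NumPy sketch, not any specific paper's method; the function and parameter names (`dp_noisy_gradient`, `clip_norm`, `noise_multiplier`) are illustrative assumptions.

```python
import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                      rng=None):
    """One DP-SGD-style noising step (illustrative sketch).

    per_example_grads: array of shape (batch_size, dim).
    clip_norm: C, the per-example L2 clipping bound (the sensitivity).
    noise_multiplier: sigma; the noise std is sigma * C / batch_size.
    """
    rng = rng or np.random.default_rng()
    batch_size = per_example_grads.shape[0]

    # Clip each example's gradient to L2 norm at most clip_norm,
    # so no single example can move the average by more than C / batch_size.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Average the clipped gradients, then add Gaussian noise
    # calibrated to the clipped sensitivity.
    mean_grad = clipped.mean(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=mean_grad.shape)
    return mean_grad + noise

# Example: noise a batch of 32 random 10-dimensional gradients.
grads = np.random.default_rng(0).normal(size=(32, 10))
print(dp_noisy_gradient(grads))
```

Larger `noise_multiplier` values yield stronger privacy but noisier updates, which is precisely the privacy-utility trade-off the mitigation techniques above try to improve.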