Differential Privacy Noise

Differential privacy adds calibrated noise to data, gradients, or model parameters during training to protect individual privacy while preserving overall data utility. Current research focuses on mitigating the negative impact of this noise on model accuracy through techniques like low-pass filtering of gradients, post-training noise adjustment, and adaptive noise injection strategies tailored to specific algorithms (e.g., stochastic gradient descent, Bayesian methods). These advances aim to improve the privacy-utility trade-off in applications such as federated learning and distributed systems, enabling more accurate models while maintaining strong privacy guarantees.
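To make the noise-injection step concrete, below is a minimal sketch of the core DP-SGD update in NumPy: per-example gradients are clipped to bound each individual's influence, then Gaussian noise scaled to that bound is added before averaging. The function name `dp_sgd_noise_step` and the values of `clip_norm` and `noise_multiplier` are illustrative; real deployments calibrate the noise multiplier to a target (epsilon, delta) privacy budget.

```python
import numpy as np

def dp_sgd_noise_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip per-example gradients and add Gaussian noise (core DP-SGD step).

    per_example_grads: array of shape (batch_size, num_params).
    clip_norm and noise_multiplier are illustrative hyperparameters.
    """
    rng = rng or np.random.default_rng()
    # Clip each example's gradient to L2 norm at most clip_norm,
    # bounding any single example's contribution (the sensitivity).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Sum the clipped gradients, add Gaussian noise scaled to the
    # sensitivity, then average over the batch.
    summed = clipped.sum(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / per_example_grads.shape[0]

# Example: a batch of 32 per-example gradients over 10 parameters.
grads = np.random.default_rng(0).normal(size=(32, 10))
noisy_grad = dp_sgd_noise_step(grads)
```

The clipping step is what ties the noise scale to a provable sensitivity bound; the accuracy-recovery techniques mentioned above (e.g., low-pass filtering or adaptive injection) operate on top of this basic mechanism to reduce the noise's effect on the learned model.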

Papers