Privacy Amplification

Privacy amplification techniques strengthen the privacy guarantees of differentially private algorithms by strategically introducing randomness, primarily through subsampling and shuffling. Current research focuses on refining privacy accounting for algorithms such as DP-SGD and DP-FTRL, investigating the impact of correlated noise, and exploring the interplay between privacy amplification and mechanisms such as matrix factorization. These advances improve the privacy-utility trade-off and thereby support the responsible use of private data in machine learning, particularly in federated learning and other distributed settings.
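
As a rough illustration of amplification by subsampling, the classical bound states that an (ε, δ)-DP mechanism applied to a Poisson-subsampled dataset with sampling rate q satisfies (ln(1 + q(e^ε − 1)), qδ)-DP. The sketch below (plain Python; function and variable names are chosen for illustration) computes this closed-form bound; practical accounting for DP-SGD typically relies on tighter Rényi-DP or privacy-loss-distribution accountants instead.

```python
import math

def amplified_epsilon(eps: float, q: float) -> float:
    """Epsilon after privacy amplification by Poisson subsampling.

    If the base mechanism is (eps, delta)-DP, running it on a Poisson
    subsample with sampling rate q gives (ln(1 + q*(e^eps - 1)), q*delta)-DP.
    This returns only the amplified epsilon; delta scales to q*delta.
    """
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

# Example: a base epsilon of 1.0 with a 1% sampling rate
# shrinks to roughly 0.017 per subsampled invocation.
print(amplified_epsilon(1.0, 0.01))
```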

Papers