DP Guarantee
Differential privacy (DP) guarantees mathematically bound the risk that an individual's data can be inferred from a differentially private algorithm's output. Current research focuses on refining DP bounds for various mechanisms, including those whose stochasticity comes from sampling, shuffling, and random initialization, often leveraging frameworks such as $f$-differential privacy and Rényi differential privacy for tighter analyses. This work is crucial for advancing privacy-preserving machine learning, particularly in federated learning settings and applications requiring robust statistics, where improved DP guarantees enhance both privacy protection and model utility.
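As a concrete illustration of what a DP guarantee bounds, the sketch below implements the classic Laplace mechanism (a standard textbook mechanism, not one from any specific paper above) and numerically checks the $\varepsilon$-DP property: for neighboring datasets whose query answers differ by at most the sensitivity, the likelihood of any released output changes by a factor of at most $e^{\varepsilon}$. The function names and parameters here are illustrative, not from a particular library.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value with Laplace noise calibrated for epsilon-DP.

    Noise scale b = sensitivity / epsilon yields epsilon-differential
    privacy for a query whose answer changes by at most `sensitivity`
    between neighboring datasets.
    """
    b = sensitivity / epsilon
    # Sample Laplace(0, b) by inverse transform from Uniform(-0.5, 0.5).
    u = rng.random() - 0.5
    noise = -b * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

def laplace_pdf(x, b):
    """Density of the Laplace(0, b) distribution at x."""
    return math.exp(-abs(x) / b) / (2 * b)

# Numerical check of the epsilon-DP guarantee: neighboring datasets with
# true counts 10 and 11 (sensitivity 1) produce output densities whose
# ratio never exceeds exp(epsilon), at any candidate output o.
eps, sens = 1.0, 1.0
b = sens / eps
for o in [9.0, 10.5, 12.3]:
    ratio = laplace_pdf(o - 10, b) / laplace_pdf(o - 11, b)
    assert ratio <= math.exp(eps) + 1e-9
```

The bound follows because the density ratio equals $\exp((|o-11| - |o-10|)/b) \le \exp(\text{sensitivity}/b) = e^{\varepsilon}$; tighter accounting frameworks such as Rényi DP refine exactly this kind of likelihood-ratio analysis under composition and subsampling.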