DP Guarantee

Differential privacy (DP) guarantees mathematically bound the risk that an individual's data can be inferred from an algorithm's output. Current research focuses on tightening DP bounds for mechanisms whose randomness comes from sampling, shuffling, and random initialization, often leveraging frameworks such as $f$-differential privacy and Rényi differential privacy for sharper analyses. This work is central to advancing privacy-preserving machine learning, particularly in federated learning settings and in applications requiring robust statistics, where tighter DP guarantees improve both privacy protection and model utility.
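
For concreteness, the most common forms of such guarantees can be sketched as follows; the notation $M$ (randomized mechanism), $D, D'$ (neighboring datasets), and $S$ (output set) is introduced here for illustration rather than taken from any particular paper. A mechanism $M$ satisfies $(\varepsilon, \delta)$-DP if, for all datasets $D, D'$ differing in one individual's record and all measurable output sets $S$,

$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta,$$

while $(\alpha, \varepsilon)$-Rényi DP instead bounds the Rényi divergence of order $\alpha > 1$ between the two output distributions,

$$D_{\alpha}\!\big(M(D) \,\|\, M(D')\big) \;\le\; \varepsilon.$$

The $f$-DP framework generalizes these notions by bounding the full trade-off curve between type I and type II errors of any test that tries to distinguish $M(D)$ from $M(D')$, which is one route to the tighter composition and subsampling analyses mentioned above.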

Papers