Approximate Differential Privacy

Approximate differential privacy (ADP), also known as (ε, δ)-differential privacy, relaxes the strict constraints of pure differential privacy by permitting a small failure probability δ, aiming to balance the release of useful information from datasets with strong privacy guarantees. Current research focuses on efficient algorithms for tasks such as mean estimation, convex optimization, and model selection, often employing the exponential mechanism, Gaussian noise addition, or Markov chain Monte Carlo methods to achieve optimal error rates under ADP. These advances are crucial for the responsible use of sensitive data in machine learning and statistical analysis, where utility must be traded off against privacy in practical applications. The development of tighter privacy bounds and computationally efficient algorithms remains a key focus.
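As an illustration of the Gaussian noise addition mentioned above, here is a minimal sketch of (ε, δ)-DP mean estimation using the classical Gaussian mechanism. The function name, clipping bounds, and parameters are illustrative choices, not taken from any specific paper; the noise calibration is the standard bound σ ≥ √(2 ln(1.25/δ)) · Δ / ε, valid for ε < 1.

```python
import numpy as np

def dp_mean_gaussian(data, lower, upper, epsilon, delta, rng=None):
    """Release an (epsilon, delta)-DP estimate of the mean.

    Values are clipped to [lower, upper], so changing one record moves
    the sum by at most (upper - lower); the mean therefore has
    sensitivity (upper - lower) / n.
    """
    rng = np.random.default_rng() if rng is None else rng
    data = np.clip(np.asarray(data, dtype=float), lower, upper)
    n = len(data)
    sensitivity = (upper - lower) / n
    # Classical Gaussian-mechanism calibration (requires epsilon < 1):
    # sigma >= sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon.
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return data.mean() + rng.normal(0.0, sigma)
```

Because the noise scale shrinks as 1/n, the released mean concentrates around the true clipped mean for large datasets while the (ε, δ) guarantee holds regardless of n.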

Papers