Differential Privacy Constraints
Differential privacy protects sensitive data used in machine learning and statistical analysis by adding carefully calibrated noise to an algorithm's outputs, providing a formal, quantifiable privacy guarantee while preserving utility. Current research optimizes algorithms for a range of tasks under these constraints, including minimum spanning tree construction, federated learning with heterogeneous privacy levels across distributed data, and hypothesis testing, often employing techniques such as Report-Noisy-Max and determinantal point processes. The field is central to responsible data analysis and model training in privacy-sensitive applications, improving algorithmic efficiency and accuracy while maintaining strong privacy guarantees.
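As a concrete illustration of calibrated noise, here is a minimal sketch of the Report-Noisy-Max mechanism mentioned above, assuming the textbook setting of sensitivity-1 counting queries (where adding Laplace noise of scale 1/ε to each count and releasing only the argmax satisfies ε-differential privacy). The function name and NumPy usage are illustrative choices, not drawn from the listed papers.

```python
import numpy as np

def report_noisy_max(counts, epsilon, rng=None):
    """Sketch of Report-Noisy-Max for sensitivity-1 counting queries.

    Adds independent Laplace(1/epsilon) noise to each count and releases
    only the index of the largest noisy count. Releasing the argmax alone
    (not the noisy values) is what yields epsilon-differential privacy.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.asarray(counts, dtype=float) + rng.laplace(
        loc=0.0, scale=1.0 / epsilon, size=len(counts)
    )
    return int(np.argmax(noisy))

# Hypothetical usage: pick the most common option from per-option counts
# without revealing the counts themselves.
winner = report_noisy_max(counts=[130, 118, 42], epsilon=0.5)
print(winner)
```

Note the design point this example captures: the noise scale depends only on the query sensitivity and the privacy budget ε, not on the number of candidates, which is why Report-Noisy-Max is attractive for selection problems.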
Papers
Optimal Federated Learning for Nonparametric Regression with Heterogeneous Distributed Differential Privacy Constraints
T. Tony Cai, Abhinav Chakraborty, Lasse Vuursteen
Federated Nonparametric Hypothesis Testing with Differential Privacy Constraints: Optimal Rates and Adaptive Tests
T. Tony Cai, Abhinav Chakraborty, Lasse Vuursteen