Differential Privacy Mechanism
Differential privacy mechanisms add carefully calibrated noise to data or model outputs so that the result reveals little about any single individual's record, enabling data analysis and machine learning under provable privacy guarantees (typically stated as an (ε, δ) bound). Current research focuses on optimizing the trade-off between privacy preservation and utility across diverse applications, including federated learning, neural networks (e.g., Neural Tangent Kernel regression), and various data types (e.g., spatiotemporal, image, and tabular data). This active research area is crucial for responsible data sharing and deployment of AI systems, particularly in sensitive domains like healthcare and finance, by mitigating privacy risks associated with data analysis and model training.
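To make "carefully calibrated noise" concrete, here is a minimal sketch of the classic Laplace mechanism: noise is drawn from a Laplace distribution whose scale is the query's sensitivity divided by the privacy budget ε. The function and variable names below are illustrative, not from any specific library.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value + Laplace(scale = sensitivity / epsilon) noise.

    Sensitivity is the maximum change in the query's output when any one
    individual's record is added or removed; a smaller epsilon means
    stronger privacy and therefore more noise.
    """
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    # Sample Laplace noise via the inverse-CDF method: u uniform on (-0.5, 0.5).
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: privately release a count. A counting query has sensitivity 1,
# because adding or removing one person changes the count by at most 1.
rng = random.Random(0)
true_count = 1234
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
```

The released value is unbiased (the noise has mean zero), so repeated queries average out toward the truth; this is exactly why practical deployments must track a cumulative privacy budget across queries.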