Differential Privacy Mechanism
Differential privacy mechanisms add carefully calibrated noise to data or model outputs, enabling data analysis and machine learning while guaranteeing provable privacy bounds. Current research focuses on optimizing the trade-off between privacy preservation and utility across diverse applications, including federated learning, neural networks (e.g., Neural Tangent Kernel regression), and various data types (e.g., spatiotemporal, image, and tabular data). By mitigating the privacy risks inherent in data analysis and model training, this active research area is crucial for responsible data sharing and deployment of AI systems, particularly in sensitive domains such as healthcare and finance.
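As a concrete illustration of "carefully calibrated noise," the classic Laplace mechanism releases a numeric query answer with epsilon-differential privacy by adding noise scaled to the query's sensitivity divided by epsilon. The sketch below (function names and parameters are illustrative, not from any specific paper listed here) samples Laplace noise via inverse-CDF sampling:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of Laplace(0, scale):
    # draw u uniform on (-0.5, 0.5), then invert the Laplace CDF.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with epsilon-differential privacy.

    sensitivity: the maximum change in the query's output when
                 one individual's record is added or removed.
    epsilon:     the privacy budget (smaller = more private, noisier).
    """
    scale = sensitivity / epsilon
    return true_value + laplace_noise(scale)

# Example: privately release a count query (sensitivity 1) with epsilon = 0.5.
noisy_count = laplace_mechanism(1000, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon yields a larger noise scale and hence stronger privacy at the cost of utility, which is exactly the privacy–utility trade-off the research above seeks to optimize.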