Privacy Model
Privacy models aim to safeguard sensitive data used in machine learning and data analysis while preserving its utility. Current research focuses on developing and auditing differentially private algorithms, including those for clustering, and on alternative models such as shuffle privacy and local differential privacy, with particular emphasis on mitigating privacy risks in federated learning and other distributed settings. These advances are crucial for responsible data use, balancing the need for data-driven insights against robust privacy protections.
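To make the central and local models concrete, the sketch below contrasts the standard Laplace mechanism (central differential privacy, where a trusted curator adds calibrated noise to an aggregate query) with randomized response (local differential privacy, where each user perturbs their own value before it leaves their device). This is a minimal illustrative sketch; the function names, parameters, and the toy dataset are assumptions chosen for exposition and are not taken from any of the listed papers.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Central DP: release a noisy answer satisfying epsilon-differential privacy.

    Laplace noise with scale sensitivity / epsilon is the standard calibration
    when a trusted curator answers an aggregate query.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

def randomized_response(bit: int, epsilon: float) -> int:
    """Local DP: each user perturbs their own bit before reporting it.

    Reports the true bit with probability e^eps / (e^eps + 1), otherwise flips it,
    so no curator ever observes the raw value.
    """
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if np.random.random() < p_truth else 1 - bit

# Toy example (illustrative data): privately estimate the mean of a sensitive 0/1 attribute.
data = np.random.binomial(1, 0.3, size=1000)
eps = 1.0

# Central model: a trusted curator adds noise to the aggregate count (sensitivity 1).
noisy_count = laplace_mechanism(float(data.sum()), sensitivity=1.0, epsilon=eps)
print("Central DP mean estimate:", noisy_count / len(data))

# Local model: every user randomizes locally; the aggregator debiases the reports.
reports = np.array([randomized_response(int(b), eps) for b in data])
p = np.exp(eps) / (np.exp(eps) + 1.0)
ldp_estimate = (reports.mean() - (1 - p)) / (2 * p - 1)
print("Local DP mean estimate:", ldp_estimate)
```

In both models, smaller values of epsilon give stronger privacy at the cost of noisier estimates; the local model typically needs more data than the central model to reach the same accuracy, which is one motivation for intermediate models such as shuffle privacy.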
Papers
Fourteen papers, dated from February 22, 2022 to June 17, 2024.