Privacy Evaluation
Privacy evaluation in machine learning, particularly for large language models (LLMs) and generative models, focuses on assessing and mitigating privacy risks while preserving data utility. Current research emphasizes designing and benchmarking new evaluation metrics, including metrics aligned with human perception of privacy, and improving anonymization techniques that balance protection against the usefulness of the anonymized data. Robust evaluation is crucial for the responsible development and deployment of AI systems, for compliance with data protection regulations, and for safeguarding user privacy across applications.
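The privacy-utility tradeoff described above can be made concrete with a toy evaluation. The sketch below is illustrative only: it assumes a hypothetical record format with known identifiers, uses simple string redaction as a stand-in for an NER-based anonymizer, and scores utility by lexical overlap; real evaluations would use learned anonymizers, membership-inference attacks, and task-level utility measures.

```python
# Hypothetical toy records pairing free text with its known identifiers.
RECORDS = [
    {"text": "Alice Smith visited Berlin on 2023-05-01.",
     "identifiers": ["Alice Smith", "Berlin"]},
    {"text": "Bob Jones paid 300 EUR in Paris.",
     "identifiers": ["Bob Jones", "Paris"]},
]

def anonymize(text: str, identifiers: list[str]) -> str:
    """Redact known identifiers; a stand-in for a real anonymization model."""
    for ident in identifiers:
        text = text.replace(ident, "[REDACTED]")
    return text

def privacy_score(anonymized: str, identifiers: list[str]) -> float:
    """Fraction of known identifiers no longer present (1.0 = all removed)."""
    leaked = sum(1 for ident in identifiers if ident in anonymized)
    return 1.0 - leaked / len(identifiers)

def utility_score(original: str, anonymized: str) -> float:
    """Fraction of the original token vocabulary preserved, a crude utility proxy."""
    orig_tokens = set(original.split())
    anon_tokens = set(anonymized.split())
    return len(orig_tokens & anon_tokens) / len(orig_tokens)

if __name__ == "__main__":
    for record in RECORDS:
        anon = anonymize(record["text"], record["identifiers"])
        print(anon,
              privacy_score(anon, record["identifiers"]),
              round(utility_score(record["text"], anon), 2))
```

Any real benchmark reports both scores together: an anonymizer that deletes everything gets perfect privacy and zero utility, so comparisons are only meaningful along the full tradeoff curve.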