Privacy Evaluation

Privacy evaluation in machine learning, particularly for large language models (LLMs) and generative models, focuses on robust methods to assess and mitigate privacy risks while preserving data utility. Current research emphasizes designing and benchmarking new evaluation metrics, including metrics aligned with human perception of privacy, and improving anonymization techniques that balance privacy protection against the usefulness of the anonymized data. This work is crucial for the responsible development and deployment of AI systems, for compliance with data protection regulations, and for safeguarding user privacy across applications.
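One concrete, classical instance of the privacy–utility trade-off mentioned above is k-anonymity: a dataset is k-anonymous if every record is indistinguishable from at least k−1 others on its quasi-identifier attributes. The sketch below (toy data and attribute names are illustrative, not from any paper in this collection) computes the k level of a small table:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity level: the size of the smallest group of
    records that share identical values on all quasi-identifiers."""
    groups = Counter(
        tuple(record[attr] for attr in quasi_identifiers)
        for record in records
    )
    return min(groups.values())

# Toy dataset: generalized age bracket and ZIP prefix act as quasi-identifiers.
records = [
    {"age": "30-39", "zip": "902**", "condition": "flu"},
    {"age": "30-39", "zip": "902**", "condition": "cold"},
    {"age": "40-49", "zip": "903**", "condition": "flu"},
]

print(k_anonymity(records, ["age", "zip"]))  # prints 1: one record is unique
```

Raising k (e.g., by generalizing ZIP codes further) strengthens privacy but coarsens the data, which is exactly the utility cost that modern evaluation metrics try to quantify.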

Papers