Privacy Risk
Privacy risk in artificial intelligence, particularly in large language models (LLMs) and federated learning systems, concerns identifying and mitigating vulnerabilities that expose sensitive data. Current research emphasizes membership inference attacks, which test whether a specific data point was used to train a model, and data reconstruction attacks, which aim to recover original data from model outputs or intermediate representations. This work underpins the development of secure, trustworthy AI systems, shaping both the responsible deployment of AI technologies and the protection of individual privacy in applications such as healthcare and finance.
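Membership inference attacks are often demonstrated with a simple loss-threshold rule: a model tends to assign lower loss to examples it was trained on, so a point whose loss falls below a calibrated threshold is guessed to be a training member. Below is a minimal, self-contained sketch of that idea on synthetic data, in the spirit of the classic loss-based attack (Yeom et al.); the model, dataset, and threshold calibration are all illustrative assumptions, not taken from any of the papers listed in this section.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Everything here (model, data, threshold rule) is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Numerically safe logistic function (logits clipped to avoid overflow).
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train_logistic(X, y, lr=0.5, epochs=3000):
    # Tiny logistic-regression trainer; deliberately run long so the
    # model overfits (memorizes) its small training set.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        g = sigmoid(X @ w + b) - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def per_example_loss(w, b, X, y):
    # Cross-entropy loss of each example under the fitted model.
    p = np.clip(sigmoid(X @ w + b), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Synthetic task with 20% label noise: noisy labels can only be memorized,
# which widens the loss gap between members and non-members.
X = rng.normal(size=(60, 20))
y = (X[:, 0] - X[:, 1] > 0).astype(float)
flip = rng.random(60) < 0.2
y[flip] = 1.0 - y[flip]

X_mem, y_mem = X[:30], y[:30]    # member (training) set
X_non, y_non = X[30:], y[30:]    # non-member (held-out) set

w, b = train_logistic(X_mem, y_mem)

# Attack rule: call a point a "member" if its loss is below a threshold;
# the mean member loss is used here as one common, simple calibration.
tau = per_example_loss(w, b, X_mem, y_mem).mean()
hit_mem = per_example_loss(w, b, X_mem, y_mem) < tau   # true positives
hit_non = per_example_loss(w, b, X_non, y_non) < tau   # false positives

balanced_acc = 0.5 * (hit_mem.mean() + 1.0 - hit_non.mean())
print(f"membership inference balanced accuracy: {balanced_acc:.2f}")
```

Practical attacks replace the mean-loss threshold with stronger calibration, for example per-example thresholds learned from shadow models, but the member/non-member loss gap they exploit is the same one this sketch shows.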
Papers
Data Plagiarism Index: Characterizing the Privacy Risk of Data-Copying in Tabular Generative Models
Joshua Ward, Chi-Hua Wang, Guang Cheng
UIFV: Data Reconstruction Attack in Vertical Federated Learning
Jirui Yang, Peng Chen, Zhihui Lu, Qiang Duan, Yubing Bao
PFID: Privacy First Inference Delegation Framework for LLMs
Haoyan Yang, Zhitao Li, Yong Zhang, Jianzong Wang, Ning Cheng, Ming Li, Jing Xiao
A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures
Thanh Tam Nguyen, Thanh Trung Huynh, Zhao Ren, Thanh Toan Nguyen, Phi Le Nguyen, Hongzhi Yin, Quoc Viet Hung Nguyen
AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight
Nicola Fabiano