Privacy Risk
Privacy risk in artificial intelligence, particularly in large language models (LLMs) and federated learning systems, is a critical research area focused on identifying and mitigating vulnerabilities that expose sensitive data. Current work emphasizes membership inference attacks, which assess whether specific data points were used to train a model, and data reconstruction attacks, which aim to recover original training data from model outputs or intermediate representations. These efforts underpin the development of secure and trustworthy AI systems, shaping both the responsible deployment of AI and the protection of individual privacy in domains such as healthcare and finance.
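To make the membership-inference idea concrete, below is a minimal sketch of a loss-threshold attack in the style of Yeom et al. (2018): the attacker predicts "member" when a target example's loss falls below a calibrated threshold, exploiting the fact that models tend to fit training points more tightly than unseen ones. The toy model, synthetic data, and threshold heuristic are all illustrative assumptions, not drawn from the papers listed below.

```python
# A minimal sketch of a loss-threshold membership inference attack.
# The model, data, and calibration heuristic are illustrative only.
import numpy as np

def per_example_loss(model, X, y):
    """Cross-entropy loss of `model` on each (x, y) pair."""
    probs = model.predict_proba(X)                       # shape: (n, n_classes)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)  # avoid log(0)

def membership_inference(model, X, y, threshold):
    """Predict 'member' (True) when the per-example loss is below the threshold.

    Intuition: training points typically have lower loss than unseen
    points, so unusually low loss suggests the example was in the
    training set.
    """
    return per_example_loss(model, X, y) < threshold

# --- toy demonstration on synthetic data ---
if __name__ == "__main__":
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 10)); y_train = rng.integers(0, 2, 200)
    X_out   = rng.normal(size=(200, 10)); y_out   = rng.integers(0, 2, 200)

    model = LogisticRegression().fit(X_train, y_train)

    # Calibrate the threshold as the mean training loss (a simple heuristic;
    # a hypothetical choice here, not prescribed by any particular paper).
    threshold = per_example_loss(model, X_train, y_train).mean()

    members    = membership_inference(model, X_train, y_train, threshold)
    nonmembers = membership_inference(model, X_out, y_out, threshold)
    # Attack success is often summarized as TPR minus FPR ("advantage").
    print(f"TPR={members.mean():.2f}  FPR={nonmembers.mean():.2f}")
```

In practice, stronger attacks calibrate per-example thresholds using shadow models trained on similar data, but the underlying signal they exploit is the same train/test loss gap shown here.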
Papers
On the Privacy Risk of In-context Learning
Haonan Duan, Adam Dziedzic, Mohammad Yaghini, Nicolas Papernot, Franziska Boenisch
mmSpyVR: Exploiting mmWave Radar for Penetrating Obstacles to Uncover Privacy Vulnerability of Virtual Reality
Luoyu Mei, Ruofeng Liu, Zhimeng Yin, Qingchuan Zhao, Wenchao Jiang, Shuai Wang, Kangjie Lu, Tian He