Privacy Risk
Privacy risk in artificial intelligence, particularly concerning large language models (LLMs) and federated learning systems, is a critical area of research focused on identifying and mitigating vulnerabilities that expose sensitive data. Current work emphasizes membership inference attacks, which assess whether specific data points were used in model training, and data reconstruction attacks, which aim to recover original data from model outputs or intermediate representations such as shared gradients. These efforts are crucial for building secure and trustworthy AI systems and for protecting individual privacy in applications such as healthcare and finance.
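To make the membership inference threat concrete, the sketch below shows a simple loss-threshold attack: because models typically fit their training examples more tightly than unseen data, an adversary can guess membership by comparing a point's loss under the target model against a threshold. This is a minimal illustration under assumed synthetic data and a generic scikit-learn classifier, not a reproduction of any specific paper's attack; all names (`X_members`, `target`, `threshold`, etc.) are hypothetical placeholders.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Assumes synthetic data and a simple target model; all names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: "members" were used to train the target model, "non-members" were not.
X_members = rng.normal(size=(200, 10))
y_members = (X_members[:, 0] > 0).astype(int)
X_nonmembers = rng.normal(size=(200, 10))
y_nonmembers = (X_nonmembers[:, 0] > 0).astype(int)

# Target model trained only on the member set.
target = LogisticRegression().fit(X_members, y_members)

def per_example_loss(model, X, y):
    """Per-example cross-entropy loss under the target model."""
    probs = model.predict_proba(X)
    return -np.log(np.clip(probs[np.arange(len(y)), y], 1e-12, None))

loss_in = per_example_loss(target, X_members, y_members)
loss_out = per_example_loss(target, X_nonmembers, y_nonmembers)

# The attack: predict "member" when the loss falls below a threshold,
# exploiting the tendency of training points to have lower loss.
threshold = np.median(np.concatenate([loss_in, loss_out]))
pred_member_in = loss_in < threshold    # correct predictions on true members
pred_member_out = loss_out < threshold  # false positives on non-members

attack_accuracy = 0.5 * (pred_member_in.mean() + (1 - pred_member_out.mean()))
print(f"membership inference accuracy: {attack_accuracy:.2f}")
```

An attack accuracy noticeably above 0.5 indicates that the model leaks information about which examples it was trained on, which is the kind of vulnerability the research surveyed here seeks to measure and mitigate.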