Privacy Risk
Privacy risk in artificial intelligence, particularly in large language models (LLMs) and federated learning systems, is a critical research area focused on identifying and mitigating vulnerabilities that expose sensitive data. Current work emphasizes membership inference attacks, which determine whether specific data points were used to train a model, and data reconstruction attacks, which aim to recover original training data from model outputs or intermediate representations. These efforts are crucial for building secure and trustworthy AI systems, shaping both the responsible deployment of AI technologies and the protection of individual privacy in applications such as healthcare and finance.
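To make the membership inference idea concrete, here is a minimal sketch of a loss-threshold attack. All names and the toy target model below are illustrative assumptions, not taken from any specific paper: the attacker flags a sample as a training member whenever the target model's loss on it falls below a chosen threshold, exploiting the fact that overfit models tend to have lower loss on their training data.

```python
import numpy as np

def infer_membership(model_loss, samples, threshold):
    """Loss-threshold membership inference: predict True (member)
    when the target model's loss on a sample is below the threshold."""
    return np.array([model_loss(x) < threshold for x in samples])

# Toy target "model" (an assumption for illustration): a memorizing
# 1-nearest-neighbour predictor whose loss on a point is the squared
# distance to its closest training point. Members get exactly zero
# loss, which is the extreme case of the overfitting the attack exploits.
rng = np.random.default_rng(0)
members = rng.normal(size=(50, 4))       # points the model trained on
non_members = rng.normal(size=(50, 4))   # fresh points, same distribution

def model_loss(x):
    return float(np.min(np.sum((members - x) ** 2, axis=1)))

preds_members = infer_membership(model_loss, members, threshold=1e-9)
preds_non_members = infer_membership(model_loss, non_members, threshold=1e-9)

tpr = preds_members.mean()      # fraction of members correctly flagged
fpr = preds_non_members.mean()  # fraction of non-members wrongly flagged
```

Against this deliberately overfit toy model the attack separates members from non-members almost perfectly; against well-regularized or differentially private models, the loss gap between members and non-members shrinks and the attack's advantage drops accordingly.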