Personally Identifiable Information (PII)
Personally Identifiable Information (PII) leakage from large language models (LLMs) is a significant privacy concern, prompting research on detecting, removing, and obfuscating PII. Current approaches use fine-tuned LLMs, transformer-based architectures, and graph convolutional networks to identify PII and either remove it or transform it while preserving the utility of the underlying data. These advances aim to mitigate the privacy risks associated with LLMs and to support the responsible development and deployment of these technologies, with implications for both data security and the ethical use of AI.
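To make the transformer-based detection-and-redaction idea concrete, here is a minimal sketch using the Hugging Face `transformers` token-classification pipeline. It is illustrative only: `dslim/bert-base-NER` is a general-purpose NER model standing in for a dedicated PII detector, and the `redact_pii` helper is a hypothetical name, not an API from any of the works summarized above.

```python
# A minimal sketch of transformer-based PII detection and redaction.
# Assumes the Hugging Face `transformers` library is installed;
# "dslim/bert-base-NER" is a general NER model used here as a
# stand-in for a model fine-tuned specifically for PII.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

def redact_pii(text: str) -> str:
    """Replace each detected entity span with its entity-type label."""
    entities = ner(text)
    # Redact from the end of the string so earlier character
    # offsets remain valid as the text shrinks.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        text = text[: ent["start"]] + f"[{ent['entity_group']}]" + text[ent["end"] :]
    return text

print(redact_pii("John Smith lives in Berlin and works for Acme Corp."))
# -> "[PER] lives in [LOC] and works for [ORG]."
```

Replacing spans with type labels (rather than deleting them) is one way to obfuscate PII while preserving some downstream utility of the text, in the spirit of the transform-rather-than-remove approaches mentioned above.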