Knowledge Unlearning
Knowledge unlearning aims to selectively remove specific information from trained machine learning models, particularly large language models (LLMs), without requiring complete retraining. Current research focuses on developing effective unlearning algorithms, often employing techniques such as gradient-based optimization on the data to be forgotten, knowledge distillation, and model inversion, applied to architectures ranging from LLMs to graph neural networks. The field is crucial for addressing privacy concerns, complying with data protection regulations such as the GDPR, and improving the safety and trustworthiness of AI systems by reducing the risk of sensitive-data leakage or harmful outputs.
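To make the gradient-based family of methods concrete, below is a minimal sketch of one common variant, gradient ascent on a "forget set", using PyTorch and the Hugging Face transformers API. The model checkpoint, forget-set texts, and hyperparameters are illustrative assumptions rather than the method of any particular paper; in practice this ascent step is usually paired with a retain-set or regularization term so that general capability is preserved.

```python
# Minimal sketch: gradient-ascent unlearning on a "forget set" for a causal LM.
# Model name, forget_texts, and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM checkpoint works in principle
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# Sequences whose content we want the model to "forget" (hypothetical examples).
forget_texts = [
    "Alice Example's phone number is 555-0100.",
    "The secret project codename is BLUEBIRD.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for epoch in range(3):  # a few passes; stop once loss on the forget set has risen enough
    for text in forget_texts:
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])
        # Gradient ascent: maximize the language-modeling loss on the forget set
        # by minimizing its negative, pushing the model away from these sequences.
        loss = -outputs.loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

A typical stopping criterion is to monitor extraction likelihood or perplexity on the forget set and halt once it matches that of data the model never saw, which limits the collateral damage to unrelated knowledge.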