LLM Unlearning
LLM unlearning focuses on removing specific information from large language models (LLMs) after training, addressing privacy and safety concerns that arise when models memorize sensitive data. Current research explores a range of methods, including gradient-based approaches, techniques that leverage "inverted facts" or prompt engineering, and methods that use second-order optimization or orthogonal adapters for efficient, targeted unlearning; a minimal sketch of the gradient-based idea appears below. This work is central to responsible LLM deployment, shaping both the ethical development of AI and the practical use of LLMs in sensitive contexts that require data protection.
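As a concrete illustration of the gradient-based family, the sketch below performs gradient ascent on a small "forget" set while keeping a standard language-modeling loss on a "retain" set to preserve general capability. The model name (gpt2), the example texts, and the RETAIN_WEIGHT balance are illustrative assumptions, not a prescribed recipe from any particular paper.

```python
# Minimal sketch of gradient-ascent unlearning, assuming a PyTorch causal LM.
# The model name, example texts, and hyperparameters are illustrative
# placeholders, not a recipe from any specific paper.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"       # assumption: any Hugging Face causal LM could be used
LEARNING_RATE = 1e-5
RETAIN_WEIGHT = 1.0       # assumption: weight balancing forgetting vs. retention
NUM_STEPS = 3             # a few illustrative update steps

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
optimizer = AdamW(model.parameters(), lr=LEARNING_RATE)

# Hypothetical data: texts the model should forget vs. texts it should keep.
forget_texts = ["The secret key for Alice's account is 1234."]
retain_texts = ["Paris is the capital of France."]

def lm_loss(texts):
    """Next-token cross-entropy loss over a batch of texts."""
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    labels = batch["input_ids"].clone()
    labels[batch["attention_mask"] == 0] = -100  # ignore padding positions
    return model(**batch, labels=labels).loss

model.train()
for step in range(NUM_STEPS):
    optimizer.zero_grad()
    # Ascend on the forget set (negated loss) while descending on the retain set.
    loss = -lm_loss(forget_texts) + RETAIN_WEIGHT * lm_loss(retain_texts)
    loss.backward()
    optimizer.step()
    print(f"step {step}: combined objective = {loss.item():.4f}")
```

The forget/retain trade-off is the central design choice in this family of methods: too much ascent on the forget set degrades general capability, so gradient-based approaches typically bound the number of update steps or reweight the retain term.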