Approximate Unlearning
Approximate unlearning aims to efficiently remove the influence of specific data points from already-trained machine learning models, addressing privacy concerns and the "right to be forgotten." Current research focuses on developing and evaluating algorithms for this task, particularly within the context of large language models and deep learning architectures, exploring techniques like gradient-based methods, Newton's method variations, and dataset condensation. The field's significance lies in its potential to reconcile the benefits of large-scale data training with individual data privacy rights, impacting both the ethical deployment of AI and the development of more privacy-preserving machine learning techniques.
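To make the gradient-based family of techniques concrete, here is a minimal sketch of one simple variant: train a logistic-regression model, then run a few steps of gradient *ascent* on the forget set so the model "un-fits" those points. All names (`train`, `unlearn`, the learning rates and step counts) are illustrative assumptions, not any specific paper's method; real approximate-unlearning algorithms add safeguards to preserve accuracy on the retained data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the average logistic loss w.r.t. the weights
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def loss(w, X, y):
    # Average logistic (cross-entropy) loss
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def train(X, y, lr=0.5, steps=200):
    # Standard gradient-descent training
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * grad(w, X, y)
    return w

def unlearn(w, X_forget, y_forget, lr=0.1, steps=20):
    # Hypothetical simple unlearning step: gradient ASCENT on the
    # forget set, pushing the model away from fitting those points
    w = w.copy()
    for _ in range(steps):
        w += lr * grad(w, X_forget, y_forget)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = train(X, y)
X_f, y_f = X[:10], y[:10]            # the points we want to forget
w_unlearned = unlearn(w, X_f, y_f)
```

After unlearning, the loss on the forget set should be higher than before, indicating that the model's fit to those examples has been weakened; Newton-style variants replace the ascent step with a second-order (Hessian-based) correction to approximate full retraining more closely.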