Zero-Shot Unlearning
Zero-shot unlearning aims to remove the influence of specific data points or entire classes from a trained machine learning model without access to the original training data, addressing privacy concerns and regulatory compliance. Current research focuses on algorithms that achieve this "forgetting" while minimizing performance degradation on the remaining data, drawing on techniques such as Lipschitz regularization, information-theoretic objectives, and sparse representations, applied across a range of model architectures, including CLIP. The field is central to responsible AI development: it enables sensitive information to be removed from deployed models while preserving their utility, with implications for both the ethics and the practice of machine learning.
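To make the core idea concrete, here is a minimal, hypothetical sketch of zero-shot class forgetting: since the original training data is assumed unavailable, the model is probed with Gaussian noise and its output probability for the forget class is driven down on those probes. The toy softmax classifier, the noise-probe strategy, and all names are illustrative assumptions, not the method of any particular paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a small softmax classifier trained on synthetic 3-class blobs.
# (Hypothetical stand-in for a real trained model.)
d, k, n = 5, 3, 300
centers = 3.0 * rng.normal(size=(k, d))
X = np.vstack([centers[c] + rng.normal(size=(n, d)) for c in range(k)])
y = np.repeat(np.arange(k), n)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((d, k))
for _ in range(200):                      # plain cross-entropy gradient descent
    G = softmax(X @ W)
    G[np.arange(len(y)), y] -= 1.0
    W -= 0.1 * X.T @ G / len(y)

acc_before = (softmax(X @ W).argmax(axis=1) == y).mean()

# "Zero-shot" step: no access to X, so probe the model with Gaussian noise
# and minimize the mean predicted probability of the forget class.
forget = 2
noise = rng.normal(size=(500, d))
p_before = softmax(noise @ W)[:, forget].mean()

onehot_f = np.eye(k)[forget]
for _ in range(100):
    P = softmax(noise @ W)
    # Gradient of mean p_forget w.r.t. W: noise^T [ p_f * (onehot_f - P) ] / n
    grad = noise.T @ (P[:, [forget]] * (onehot_f - P)) / len(noise)
    W -= 1.0 * grad                       # descend: shrink forget-class mass

p_after = softmax(noise @ W)[:, forget].mean()
print(f"accuracy before unlearning: {acc_before:.2f}")
print(f"mean p(forget) on probes: {p_before:.3f} -> {p_after:.3f}")
```

The sketch ignores the harder part of the problem — preserving accuracy on the retained classes — which is exactly where the regularization and information-theoretic techniques above come in.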
Papers
- October 31, 2024
- October 8, 2024
- July 10, 2024
- February 2, 2024
- November 26, 2023
- May 31, 2022