Zero-Shot Unlearning

Zero-shot unlearning aims to remove specific data points or entire classes from a trained machine learning model without access to the original training data, addressing privacy concerns and regulatory compliance. Current research focuses on algorithms that achieve this "forgetting" while minimizing performance degradation on the retained data. Techniques include Lipschitz regularization, information-theoretic approaches, and sparse representations, applied across a range of model architectures, including CLIP. This line of work is important for responsible AI development: it enables the removal of sensitive information from deployed models while preserving their utility, with implications for both the ethics and the practical deployment of machine learning systems.
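To make the Lipschitz-regularization idea concrete, here is a toy sketch (not the method of any particular paper): around each sample to be forgotten, the model's output is made insensitive to small input perturbations by penalizing the output change, which smooths the function locally and erases the sample-specific response. The linear model, the squared-difference penalty, and all parameter values below are illustrative assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear classifier f(x) = x @ W (5 features, 3 classes);
# stands in for a trained model whose weights we want to "unlearn" around x_forget.
W = rng.normal(size=(5, 3))


def local_sensitivity(W, x, noise_scale=0.1, n_samples=32):
    """Mean squared output change under small perturbations around x.

    For a linear model, f(x + d) - f(x) = d @ W, so this is a Monte-Carlo
    proxy for the local Lipschitz constant at x.
    """
    deltas = rng.normal(scale=noise_scale, size=(n_samples, x.shape[0]))
    diffs = deltas @ W
    return float(np.mean(np.sum(diffs**2, axis=1)))


def unlearn_step(W, x_forget, lr=0.5, noise_scale=0.1, n_samples=32):
    """One gradient step on the sensitivity penalty around x_forget.

    Gradient of mean ||delta @ W||^2 w.r.t. W is
    (2 / n) * deltas.T @ (deltas @ W), computed in closed form here.
    """
    deltas = rng.normal(scale=noise_scale, size=(n_samples, x_forget.shape[0]))
    grad = (2.0 / n_samples) * deltas.T @ (deltas @ W)
    return W - lr * grad


x_forget = rng.normal(size=5)  # a sample we are asked to forget
before = local_sensitivity(W, x_forget)
for _ in range(50):
    W = unlearn_step(W, x_forget)
after = local_sensitivity(W, x_forget)
print(f"sensitivity before: {before:.4f}  after: {after:.4f}")
```

For a linear model this penalty simply dampens the weights globally; the point of the sketch is the mechanism. In a nonlinear network the same perturbation-based penalty acts locally around the forget samples, flattening the model there while leaving behavior elsewhere largely intact. Note that no original training data is used, only the samples to be forgotten, which is what makes the approach "zero-shot."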

Papers