Unlearned Model
Machine unlearning aims to remove specific data or features from trained machine learning models, primarily to address privacy concerns and mitigate harmful biases. Current research focuses on developing effective unlearning algorithms for various model architectures, including large language models (LLMs), diffusion models, and graph neural networks, often employing techniques like gradient ascent, knowledge distillation, and generative adversarial networks. However, significant challenges remain, including the vulnerability of unlearned models to adversarial attacks and the difficulty in achieving complete data removal without substantially degrading model performance on retained data. Addressing these challenges is crucial for responsible deployment of machine learning systems and compliance with data privacy regulations.
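To make the gradient-ascent idea mentioned above concrete, here is a minimal sketch of one unlearning update: the loss is ascended on a batch of data to be forgotten while being descended on retained data to limit utility loss. This is an illustrative example only, not the method of any paper listed below; the model, loaders, and `forget_weight` parameter are assumptions for the sketch.

```python
# Minimal sketch of gradient-ascent-based unlearning (illustrative assumptions:
# a small PyTorch classifier and toy forget/retain batches, not any specific paper's setup).
import torch
import torch.nn as nn
import torch.nn.functional as F


def unlearn_step(model, forget_batch, retain_batch, optimizer, forget_weight=1.0):
    """One update: ascend the loss on forgotten data, descend it on retained data."""
    model.train()
    x_f, y_f = forget_batch
    x_r, y_r = retain_batch

    optimizer.zero_grad()
    # The negative sign turns gradient descent into ascent on the forget set,
    # pushing the model away from its fit to the removed examples.
    forget_loss = -forget_weight * F.cross_entropy(model(x_f), y_f)
    # Standard descent on retained data to preserve performance on what is kept.
    retain_loss = F.cross_entropy(model(x_r), y_r)
    (forget_loss + retain_loss).backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()


if __name__ == "__main__":
    # Toy setup on random data, purely to show the update running end to end.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    forget_batch = (torch.randn(8, 16), torch.randint(0, 4, (8,)))
    retain_batch = (torch.randn(8, 16), torch.randint(0, 4, (8,)))
    for _ in range(5):
        fl, rl = unlearn_step(model, forget_batch, retain_batch, optimizer)
    print(f"loss on forget set (being increased): {-fl:.3f}, retain loss: {rl:.3f}")
```

In practice the forget-versus-retain weighting and the number of ascent steps govern the trade-off noted above between complete removal and degradation on retained data.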
Papers
Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience
Thanh Trung Huynh, Trong Bang Nguyen, Phi Le Nguyen, Thanh Tam Nguyen, Matthias Weidlich, Quoc Viet Hung Nguyen, Karl Aberer
An Information Theoretic Evaluation Metric For Strong Unlearning
Dongjae Jeon, Wonje Jeung, Taeheon Kim, Albert No, Jonghyun Choi