Machine Unlearning
Machine unlearning aims to selectively remove the influence of specific data points from a trained machine learning model, addressing privacy concerns and the "right to be forgotten." Current research focuses on improving the accuracy and efficiency of unlearning algorithms for various model architectures, including deep neural networks, random forests, and generative models like diffusion models and large language models, often employing techniques like fine-tuning, gradient-based methods, and adversarial training. This field is crucial for ensuring responsible AI development and deployment, particularly in sensitive domains like healthcare and finance, where data privacy is paramount. The development of robust and efficient unlearning methods is essential for balancing the benefits of machine learning with ethical considerations and legal requirements.
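To make the fine-tuning and gradient-based approaches mentioned above concrete, the following is a minimal sketch of approximate unlearning: gradient ascent on a "forget" set to erase its influence, followed by ordinary fine-tuning on a "retain" set to preserve overall performance. It assumes PyTorch, and the toy classifier, synthetic data splits, and hyperparameters are hypothetical, not drawn from any of the papers listed below.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy setup: a small classifier plus synthetic "retain" and "forget" splits.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
retain = TensorDataset(torch.randn(256, 20), torch.randint(0, 2, (256,)))
forget = TensorDataset(torch.randn(64, 20), torch.randint(0, 2, (64,)))
retain_loader = DataLoader(retain, batch_size=32, shuffle=True)
forget_loader = DataLoader(forget, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# One unlearning pass: ascend the loss on the forget set to remove its influence...
for x_f, y_f in forget_loader:
    optimizer.zero_grad()
    loss = -criterion(model(x_f), y_f)  # negated loss => gradient ascent on forget data
    loss.backward()
    optimizer.step()

# ...then descend on the retain set so accuracy on the remaining data is preserved.
for x_r, y_r in retain_loader:
    optimizer.zero_grad()
    loss = criterion(model(x_r), y_r)   # standard fine-tuning objective on retain data
    loss.backward()
    optimizer.step()

In practice, exact unlearning requires stronger guarantees (for example, retraining from scratch on the retained data or using certified methods); this sketch only illustrates the common approximate, fine-tuning-style recipe.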
Papers
Towards Independence Criterion in Machine Unlearning of Features and Labels
Ling Han, Nanqing Luo, Hao Huang, Jing Chen, Mary-Anne Hartley
Efficient Knowledge Deletion from Trained Models through Layer-wise Partial Machine Unlearning
Vinay Chakravarthi Gogineni, Esmaeil S. Nadimi
Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning
Chongyu Fan, Jiancheng Liu, Alfred Hero, Sijia Liu