Machine Unlearning
Machine unlearning aims to selectively remove the influence of specific data points from a trained machine learning model, addressing privacy concerns and the "right to be forgotten." Current research focuses on improving the effectiveness (how completely a data point's influence is removed) and efficiency of unlearning algorithms across model architectures, including deep neural networks, random forests, and generative models such as diffusion models and large language models, often using techniques such as fine-tuning, gradient-based updates, and adversarial training. This field is crucial for responsible AI development and deployment, particularly in sensitive domains like healthcare and finance, where data privacy is paramount. Developing robust and efficient unlearning methods is essential for balancing the benefits of machine learning against ethical considerations and legal requirements.
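The gradient-based approach mentioned above can be sketched in a few lines. The example below is a minimal illustration on logistic regression: the model first performs gradient ascent on the loss of the forget set (to erase its influence) and then briefly fine-tunes on the retained data (to restore utility). All function names, hyperparameters, and the toy dataset are assumptions for illustration only, not drawn from the papers listed below.

```python
import numpy as np

# Minimal sketch of gradient-based unlearning for a logistic-regression
# model: gradient *ascent* on the forget set, then a short fine-tuning
# pass on the retained data. All names, hyperparameters, and the toy
# dataset below are illustrative assumptions, not taken from any paper.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the mean logistic loss with respect to w.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def train(X, y, steps=500, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * grad(w, X, y)
    return w

def unlearn(w, X_forget, y_forget, X_retain, y_retain,
            ascent_steps=50, ascent_lr=0.1,
            finetune_steps=100, finetune_lr=0.5):
    w = w.copy()
    # Step 1: ascend the loss on the forget set to erase its influence.
    for _ in range(ascent_steps):
        w += ascent_lr * grad(w, X_forget, y_forget)
    # Step 2: fine-tune on retained data to restore model utility.
    for _ in range(finetune_steps):
        w -= finetune_lr * grad(w, X_retain, y_retain)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w_full = train(X, y)
# Suppose the first 20 rows must be forgotten.
w_unlearned = unlearn(w_full, X[:20], y[:20], X[20:], y[20:])
retain_acc = np.mean((sigmoid(X[20:] @ w_unlearned) > 0.5) == (y[20:] > 0.5))
print(f"accuracy on retained data after unlearning: {retain_acc:.2f}")
```

In practice, unlearning quality is typically judged by how close the unlearned model is to one retrained from scratch on the retained data alone; certified unlearning methods, as in the work on deep neural networks below, aim to bound this gap formally rather than rely on heuristics like the ascent step shown here.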
Papers
Verification of Machine Unlearning is Fragile
Binchi Zhang, Zihan Chen, Cong Shen, Jundong Li
Towards Certified Unlearning for Deep Neural Networks
Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li
On the Limitations and Prospects of Machine Unlearning for Generative AI
Shiji Zhou, Lianzhe Wang, Jiangnan Ye, Yongliang Wu, Heng Chang