Machine Unlearning
Machine unlearning aims to selectively remove the influence of specific data points from a trained model, addressing privacy concerns and the "right to be forgotten." Current research focuses on improving the accuracy and efficiency of unlearning algorithms across model architectures, including deep neural networks, random forests, and generative models such as diffusion models and large language models, often using fine-tuning, gradient-based methods, and adversarial training. Robust and efficient unlearning is essential for responsible AI development and deployment, particularly in privacy-sensitive domains such as healthcare and finance, and for balancing the benefits of machine learning with ethical and legal requirements.
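To make the gradient-based approach concrete, here is a minimal sketch of unlearning by gradient ascent on the forget set, using a toy logistic-regression model. The data, step sizes, and iteration counts are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (hypothetical setup for illustration)
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, Xb, yb):
    # Mean logistic loss, clipped for numerical stability
    p = np.clip(sigmoid(Xb @ w), 1e-9, 1 - 1e-9)
    return -np.mean(yb * np.log(p) + (1 - yb) * np.log(1 - p))

def grad(w, Xb, yb):
    # Gradient of the mean logistic loss
    return Xb.T @ (sigmoid(Xb @ w) - yb) / len(yb)

# 1) Train on the full dataset by gradient descent.
w = np.zeros(5)
for _ in range(500):
    w -= 0.5 * grad(w, X, y)

# 2) "Unlearn" the first 20 points by ascending their loss.
forget_X, forget_y = X[:20], y[:20]
loss_forget_before = loss(w, forget_X, forget_y)
for _ in range(50):
    w += 0.1 * grad(w, forget_X, forget_y)
loss_forget_after = loss(w, forget_X, forget_y)
# The model now fits the forgotten points worse; practical methods
# additionally fine-tune on the retained data to preserve accuracy.
```

In practice, gradient ascent alone degrades overall utility, which is why published methods interleave it with fine-tuning on retained data or restrict updates to salient weights.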
Papers
Fast Model Debias with Machine Unlearning
Ruizhe Chen, Jianfei Yang, Huimin Xiong, Jianhong Bai, Tianxiang Hu, Jin Hao, Yang Feng, Joey Tianyi Zhou, Jian Wu, Zuozhu Liu
SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation
Chongyu Fan, Jiancheng Liu, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu