Machine Unlearning
Machine unlearning aims to selectively remove the influence of specific data points from a trained machine learning model, addressing privacy concerns and the "right to be forgotten." Current research focuses on improving the accuracy and efficiency of unlearning algorithms for various model architectures, including deep neural networks, random forests, and generative models like diffusion models and large language models, often employing techniques like fine-tuning, gradient-based methods, and adversarial training. This field is crucial for ensuring responsible AI development and deployment, particularly in sensitive domains like healthcare and finance, where data privacy is paramount. The development of robust and efficient unlearning methods is essential for balancing the benefits of machine learning with ethical considerations and legal requirements.
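To make the gradient-based approach mentioned above concrete, below is a minimal sketch of approximate unlearning by gradient ascent on a "forget set." It assumes a standard PyTorch classification model; the function name `unlearn_by_gradient_ascent` and its parameters are illustrative and not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def unlearn_by_gradient_ascent(model: nn.Module,
                               forget_loader: DataLoader,
                               lr: float = 1e-4,
                               epochs: int = 1) -> nn.Module:
    """Approximate unlearning sketch: raise the loss on the data to be
    forgotten so the model's predictions on those points degrade.
    In practice this is usually followed by brief fine-tuning on the
    retained data to recover overall accuracy."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for inputs, targets in forget_loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            # Negate the loss so the optimizer step ascends it
            # on the forget set instead of minimizing it.
            (-loss).backward()
            optimizer.step()
    return model
```

This is only one family of methods; exact unlearning (e.g., retraining from data shards), fine-tuning-based approaches, and adversarial formulations trade off guarantees, accuracy, and compute differently.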
Papers
Federated Learning with Blockchain-Enhanced Machine Unlearning: A Trustworthy Approach
Xuhan Zuo, Minghao Wang, Tianqing Zhu, Lefeng Zhang, Shui Yu, Wanlei Zhou
Exploring Fairness in Educational Data Mining in the Context of the Right to be Forgotten
Wei Qian, Aobo Chen, Chenxu Zhao, Yangyi Li, Mengdi Huai
Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning
Wenhan Chang, Tianqing Zhu, Heng Xu, Wenjian Liu, Wanlei Zhou
Erase to Enhance: Data-Efficient Machine Unlearning in MRI Reconstruction
Yuyang Xue, Jingshuai Liu, Steven McDonagh, Sotirios A. Tsaftaris
Towards Natural Machine Unlearning
Zhengbao He, Tao Li, Xinwen Cheng, Zhehao Huang, Xiaolin Huang
Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient
Yongliang Wu, Shiji Zhou, Mingzhuo Yang, Lianzhe Wang, Wenbo Zhu, Heng Chang, Xiao Zhou, Xu Yang
Machine Unlearning in Large Language Models
Saaketh Koundinya Gundavarapu, Shreya Agarwal, Arushi Arora, Chandana Thimmalapura Jagadeeshaiah