Machine Unlearning
Machine unlearning aims to selectively remove the influence of specific data points from a trained model, addressing privacy concerns and the "right to be forgotten." Current research focuses on improving the accuracy and efficiency of unlearning algorithms across model architectures, from deep neural networks and random forests to generative models such as diffusion models and large language models, typically through fine-tuning, gradient-based updates, or adversarial training. Robust and efficient unlearning is essential for responsible AI development and deployment, particularly in privacy-sensitive domains such as healthcare and finance, where it helps balance the benefits of machine learning against ethical and legal requirements.
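To make the gradient-based family mentioned above concrete, the following is a minimal sketch (not any specific published method): it ascends the loss on a batch from the forget set while descending it on retained data, using a hypothetical classifier, optimizer, and batches that the caller would supply.

    import torch
    import torch.nn.functional as F

    def unlearn_step(model, optimizer, forget_batch, retain_batch, forget_weight=1.0):
        """One illustrative gradient-based unlearning step: ascend the loss on
        data to be forgotten while descending it on retained data. Real methods
        add normalization, clipping, and stopping criteria."""
        model.train()
        optimizer.zero_grad()

        # Loss on the forget set: subtracted below so a standard optimizer step
        # performs gradient ascent, pushing the model away from this data.
        fx, fy = forget_batch
        forget_loss = F.cross_entropy(model(fx), fy)

        # Loss on the retain set: ordinary descent to preserve overall utility.
        rx, ry = retain_batch
        retain_loss = F.cross_entropy(model(rx), ry)

        total = retain_loss - forget_weight * forget_loss
        total.backward()
        optimizer.step()
        return forget_loss.item(), retain_loss.item()

The single scalar forget_weight is a stand-in for the trade-off between forgetting and utility that much of the literature below tries to manage more carefully.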
Papers
Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate
Zhiqi Bu, Xiaomeng Jin, Bhanukiran Vinzamuri, Anil Ramakrishna, Kai-Wei Chang, Volkan Cevher, Mingyi Hong
Machine Unlearning using Forgetting Neural Networks
Amartya Hatua, Trung T. Nguyen, Filip Cano, Andrew H. Sung
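The first paper above frames unlearning as a multi-task trade-off between a retain objective and a forget objective, balanced via a normalized gradient difference. The sketch below only approximates that idea under simple assumptions (unit-norm gradient scaling, a fixed learning rate, hypothetical loss callables); it is not the authors' algorithm or implementation.

    import torch

    def normalized_gradient_difference_step(model, retain_loss_fn, forget_loss_fn,
                                            retain_batch, forget_batch, lr=1e-4):
        """Illustrative multi-task update: normalize the retain and forget
        gradients to unit norm, then step along their difference so neither
        objective dominates."""
        params = [p for p in model.parameters() if p.requires_grad]

        # Gradient of the retain objective (to be minimized).
        retain_loss = retain_loss_fn(model, retain_batch)
        g_retain = torch.autograd.grad(retain_loss, params)

        # Gradient of the forget objective (to be maximized, hence subtracted).
        forget_loss = forget_loss_fn(model, forget_batch)
        g_forget = torch.autograd.grad(forget_loss, params)

        def _normalize(grads):
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
            return [g / norm for g in grads]

        g_retain, g_forget = _normalize(g_retain), _normalize(g_forget)

        with torch.no_grad():
            for p, gr, gf in zip(params, g_retain, g_forget):
                # Descend the retain gradient, ascend the forget gradient.
                p -= lr * (gr - gf)
        return retain_loss.item(), forget_loss.item()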