Harmful Unlearning
Harmful unlearning, a form of machine unlearning, aims to remove specific data or harmful knowledge from trained machine learning models, particularly large language models (LLMs), without retraining them from scratch. Current research focuses on developing effective unlearning algorithms, often employing techniques such as gradient-based methods, knowledge distillation, and adversarial training, across model architectures including LLMs and diffusion models. This field is crucial for addressing privacy concerns, mitigating biases, and improving the safety and robustness of AI systems, with implications for both data protection regulation and the trustworthiness of AI applications.
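To make the "gradient-based methods" mentioned above concrete, the sketch below shows one common pattern: gradient ascent on a forget set combined with ordinary gradient descent on a retain set to preserve general utility. This is a minimal illustration under assumed names (ToyLM, lm_loss, unlearn_gradient_ascent, forget_loader, retain_loader are hypothetical), not the method of any paper listed here.

```python
# Minimal sketch of gradient-ascent unlearning. All class/function names are
# illustrative assumptions, not any specific paper's API.
from itertools import cycle

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class ToyLM(nn.Module):
    """Tiny stand-in for an LLM: embedding plus a linear head over a small vocab."""
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                     # tokens: (batch, seq)
        return self.head(self.embed(tokens))       # logits: (batch, seq, vocab)


def lm_loss(model, tokens):
    """Next-token cross-entropy on a batch of token sequences."""
    logits = model(tokens[:, :-1])
    targets = tokens[:, 1:]
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
    )


def unlearn_gradient_ascent(model, forget_loader, retain_loader,
                            steps=50, lr=1e-3, retain_weight=1.0):
    """Ascend the loss on the forget set while descending on the retain set."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    forget_iter, retain_iter = cycle(forget_loader), cycle(retain_loader)
    for _ in range(steps):
        (forget_batch,) = next(forget_iter)
        (retain_batch,) = next(retain_iter)
        # Negative sign on the forget loss = gradient ascent (forget the data);
        # positive sign on the retain loss = ordinary descent (keep utility).
        loss = (-lm_loss(model, forget_batch)
                + retain_weight * lm_loss(model, retain_batch))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyLM()
    forget = TensorDataset(torch.randint(0, 100, (64, 16)))   # synthetic "forget" data
    retain = TensorDataset(torch.randint(0, 100, (256, 16)))  # synthetic "retain" data
    unlearn_gradient_ascent(model,
                            DataLoader(forget, batch_size=8),
                            DataLoader(retain, batch_size=8))
```

The retain_weight term reflects the usual trade-off in such methods: pushing it higher protects general capability at the cost of slower forgetting, while dropping it to zero forgets faster but risks degrading the model overall.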
Papers
PISTOL: Dataset Compilation Pipeline for Structural Unlearning of LLMs
Xinchi Qiu, William F. Shen, Yihong Chen, Nicola Cancedda, Pontus Stenetorp, Nicholas D. Lane
Machine Unlearning with Minimal Gradient Dependence for High Unlearning Ratios
Tao Huang, Ziyang Chen, Jiayang Meng, Qingyu Huang, Xu Yang, Xun Yi, Ibrahim Khalil
Every Language Counts: Learn and Unlearn in Multilingual LLMs
Taiming Lu, Philipp Koehn
Certification for Differentially Private Prediction in Gradient-Based Training
Matthew Wicker, Philip Sosnin, Igor Shilov, Adrianna Janik, Mark N. Müller, Yves-Alexandre de Montjoye, Adrian Weller, Calvin Tsay
Jogging the Memory of Unlearned LLMs Through Targeted Relearning Attacks
Shengyuan Hu, Yiwei Fu, Zhiwei Steven Wu, Virginia Smith
Textual Unlearning Gives a False Sense of Unlearning
Jiacheng Du, Zhibo Wang, Kui Ren
Soft Prompting for Unlearning in Large Language Models
Karuna Bhaila, Minh-Hao Van, Xintao Wu
Split, Unlearn, Merge: Leveraging Data Attributes for More Effective Unlearning in LLMs
Swanand Ravindra Kadhe, Farhan Ahmed, Dennis Wei, Nathalie Baracaldo, Inkit Padhi
Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces
Yihuai Hong, Lei Yu, Shauli Ravfogel, Haiqin Yang, Mor Geva