Anti-Forgetting
"Anti-forgetting" in machine learning focuses on mitigating the tendency of models, particularly deep neural networks and large language models, to lose previously learned information when acquiring new knowledge. Current research emphasizes techniques like experience replay, regularization methods, and novel optimization algorithms (e.g., momentum-filtered optimizers) to improve knowledge retention across various tasks and datasets, often within continual learning or machine unlearning frameworks. This field is crucial for developing more robust and adaptable AI systems, impacting areas like robotics, personalized medicine, and natural language processing by enabling lifelong learning and efficient knowledge management.
Papers
Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference
Jiabao Ji, Yujian Liu, Yang Zhang, Gaowen Liu, Ramana Rao Kompella, Sijia Liu, Shiyu Chang
Decoupling the Class Label and the Target Concept in Machine Unlearning
Jianing Zhu, Bo Han, Jiangchao Yao, Jianliang Xu, Gang Niu, Masashi Sugiyama