Anti-Forgetting
"Anti-forgetting" in machine learning focuses on mitigating the tendency of models, particularly deep neural networks and large language models, to lose previously learned information when acquiring new knowledge. Current research emphasizes techniques like experience replay, regularization methods, and novel optimization algorithms (e.g., momentum-filtered optimizers) to improve knowledge retention across various tasks and datasets, often within continual learning or machine unlearning frameworks. This field is crucial for developing more robust and adaptable AI systems, impacting areas like robotics, personalized medicine, and natural language processing by enabling lifelong learning and efficient knowledge management.
Papers
LoRA Learns Less and Forgets Less
Dan Biderman, Jacob Portes, Jose Javier Gonzalez Ortiz, Mansheej Paul, Philip Greengard, Connor Jennings, Daniel King, Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, John P. Cunningham
Overcoming Domain Drift in Online Continual Learning
Fan Lyu, Daofeng Liu, Linglan Zhao, Zhang Zhang, Fanhua Shang, Fuyuan Hu, Wei Feng, Liang Wang
Larimar: Large Language Models with Episodic Memory Control
Payel Das, Subhajit Chaudhury, Elliot Nelson, Igor Melnyk, Sarath Swaminathan, Sihui Dai, Aurélie Lozano, Georgios Kollias, Vijil Chenthamarakshan, Jiří Navrátil, Soham Dan, Pin-Yu Chen
Continual Forgetting for Pre-trained Vision Models
Hongbo Zhao, Bolin Ni, Haochen Wang, Junsong Fan, Fei Zhu, Yuxi Wang, Yuntao Chen, Gaofeng Meng, Zhaoxiang Zhang
Learning to better see the unseen: Broad-Deep Mixed Anti-Forgetting Framework for Incremental Zero-Shot Fault Diagnosis
Jiancheng Zhao, Jiaqi Yue, Chunhui Zhao