Anti-Forgetting
"Anti-forgetting" in machine learning focuses on mitigating the tendency of models, particularly deep neural networks and large language models, to lose previously learned information when acquiring new knowledge. Current research emphasizes techniques like experience replay, regularization methods, and novel optimization algorithms (e.g., momentum-filtered optimizers) to improve knowledge retention across various tasks and datasets, often within continual learning or machine unlearning frameworks. This field is crucial for developing more robust and adaptable AI systems, impacting areas like robotics, personalized medicine, and natural language processing by enabling lifelong learning and efficient knowledge management.
Papers
Forget Vectors at Play: Universal Input Perturbations Driving Machine Unlearning in Image Classification
Changchang Sun, Ren Wang, Yihua Zhang, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Sijia Liu, Yan Yan
Chained Tuning Leads to Biased Forgetting
Megan Ung, Alicia Sun, Samuel J. Bell, Bhaktipriya Radharapu, Levent Sagun, Adina Williams
MOS: Model Surgery for Pre-Trained Model-Based Class-Incremental Learning
Hai-Long Sun, Da-Wei Zhou, Hanbin Zhao, Le Gan, De-Chuan Zhan, Han-Jia Ye
DASK: Distribution Rehearsing via Adaptive Style Kernel Learning for Exemplar-Free Lifelong Person Re-Identification
Kunlun Xu, Chenghao Jiang, Peixi Xiong, Yuxin Peng, Jiahuan Zhou
MoSLD: An Extremely Parameter-Efficient Mixture-of-Shared LoRAs for Multi-Task Learning
Lulu Zhao, Weihao Zeng, Xiaofeng Shi, Hua Zhou