Catastrophic Forgetting
Catastrophic forgetting is the phenomenon in which artificial neural networks, upon learning new tasks, lose previously acquired knowledge. Current research focuses on mitigating this issue through strategies such as parameter-efficient fine-tuning methods (e.g., LoRA), generative model-based data replay, and optimization algorithms that constrain gradient updates or exploit hierarchical task structure. Addressing catastrophic forgetting is crucial for building robust, adaptable AI systems capable of continual learning in real-world applications, particularly in domains such as medical imaging, robotics, and natural language processing, where data streams evolve continuously.
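To make the replay family of mitigations concrete, below is a minimal sketch of rehearsal-based continual learning in PyTorch: a small reservoir-sampled buffer stores examples from earlier tasks, and each update on the current task also rehearses a batch drawn from that buffer. This is a generic illustration under assumed names (ReplayBuffer, train_task, replay_batch are hypothetical), not the method of any specific paper listed here.

```python
# Minimal rehearsal/replay sketch for continual learning (illustrative only;
# ReplayBuffer and train_task are hypothetical names, not from the papers below).
import random
import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Reservoir-sampled store of past (input, label) pairs."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        # Reservoir sampling keeps an (approximately) uniform sample of all
        # examples seen so far within a fixed memory budget.
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi.clone(), yi.clone()))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi.clone(), yi.clone())

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def train_task(model, optimizer, loader, buffer, replay_batch=32):
    """Train on the current task while rehearsing stored examples from earlier tasks."""
    model.train()
    for x, y in loader:
        loss = F.cross_entropy(model(x), y)
        if buffer.data:
            # Mix in a batch of old examples so gradients also preserve past tasks.
            xr, yr = buffer.sample(replay_batch)
            loss = loss + F.cross_entropy(model(xr), yr)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        buffer.add(x, y)  # remember current-task examples for future rehearsal
```

Regularization-based alternatives (e.g., penalties that constrain updates to parameters important for earlier tasks) and generative replay (sampling pseudo-data from a learned generator instead of storing raw examples) follow the same loop structure but replace the stored-buffer term with a penalty or generated batch.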
Papers
Soup to go: mitigating forgetting during continual learning with model averaging
Anat Kleiman, Gintare Karolina Dziugaite, Jonathan Frankle, Sham Kakade, Mansheej Paul
LSEBMCL: A Latent Space Energy-Based Model for Continual Learning
Xiaodi Li, Dingcheng Li, Rujun Gao, Mahmoud Zamani, Latifur Khan
Solving the Catastrophic Forgetting Problem in Generalized Category Discovery
Xinzi Cao, Xiawu Zheng, Guanhong Wang, Weijiang Yu, Yunhang Shen, Ke Li, Yutong Lu, Yonghong Tian
Continuous Knowledge-Preserving Decomposition for Few-Shot Continual Learning
Xiaojie Li, Yibo Yang, Jianlong Wu, David A. Clifton, Yue Yu, Bernard Ghanem, Min Zhang
Online Continual Learning: A Systematic Literature Review of Approaches, Challenges, and Benchmarks
Seyed Amir Bidaki, Amir Mohammadkhah, Kiyan Rezaee, Faeze Hassani, Sadegh Eskandari, Maziar Salahi, Mohammad M. Ghassemi