Catastrophic Forgetting
Catastrophic forgetting describes the phenomenon where artificial neural networks, upon learning new tasks, lose previously acquired knowledge. Current research focuses on mitigating this issue through several families of strategies: parameter-efficient fine-tuning methods such as LoRA, rehearsal and generative data replay, and optimization algorithms that constrain gradient updates or exploit hierarchical task structure. Addressing catastrophic forgetting is crucial for building robust, adaptable AI systems capable of continual learning in real-world applications, particularly in domains like medical imaging, robotics, and natural language processing, where data streams evolve constantly.
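To make the replay family of strategies concrete, below is a minimal, illustrative sketch of rehearsal-based continual learning in PyTorch: a reservoir-sampled memory of past examples whose loss is mixed into each update on the new task. It is a generic example under assumed names (ReplayBuffer, train_step), not the method of any paper listed here.

```python
import random
import torch

class ReplayBuffer:
    """Reservoir-sampled memory of past examples, replayed alongside new-task data."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []   # list of (input, label) pairs
        self.seen = 0    # total examples observed so far

    def add(self, x: torch.Tensor, y: torch.Tensor):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps each observed example with equal probability.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size: int):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_step(model, optimizer, loss_fn, x_new, y_new, buffer, replay_batch=32):
    """One update that mixes current-task data with replayed old-task data."""
    optimizer.zero_grad()
    loss = loss_fn(model(x_new), y_new)
    if buffer.data:
        x_old, y_old = buffer.sample(replay_batch)
        # Rehearsal term: penalize forgetting of previously stored examples.
        loss = loss + loss_fn(model(x_old), y_old)
    loss.backward()
    optimizer.step()
    # Store the new examples for future replay.
    for xi, yi in zip(x_new, y_new):
        buffer.add(xi.detach(), yi.detach())
    return loss.item()
```

Generative replay follows the same pattern, but samples x_old from a generative model of earlier tasks instead of a stored buffer.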
Papers
Out-of-distribution forgetting: vulnerability of continual learning to intra-class distribution shift
Liangxuan Guo, Yang Chen, Shan Yu
Teacher Agent: A Knowledge Distillation-Free Framework for Rehearsal-based Video Incremental Learning
Shengqin Jiang, Yaoyu Fang, Haokui Zhang, Qingshan Liu, Yuankai Qi, Yang Yang, Peng Wang
SketchOGD: Memory-Efficient Continual Learning
Benjamin Wright, Youngjae Min, Jeremy Bernstein, Navid Azizan
Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning
Genta Indra Winata, Lingjue Xie, Karthik Radhakrishnan, Shijie Wu, Xisen Jin, Pengxiang Cheng, Mayank Kulkarni, Daniel Preotiuc-Pietro
Condensed Prototype Replay for Class Incremental Learning
Jiangtao Kong, Zhenyu Zong, Tianyi Zhou, Huajie Shao