Catastrophic Forgetting
Catastrophic forgetting describes the phenomenon where artificial neural networks, upon learning new tasks, lose previously acquired knowledge. Current research focuses on mitigating this issue through strategies such as parameter-efficient fine-tuning (e.g., LoRA), generative model-based data replay, and optimization algorithms that constrain gradient updates or exploit hierarchical task structure. Addressing catastrophic forgetting is crucial for building robust, adaptable AI systems capable of continual learning in real-world applications, particularly in domains such as medical imaging, robotics, and natural language processing, where data streams evolve over time.
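Of the strategies mentioned above, parameter-efficient fine-tuning in the LoRA style is the easiest to illustrate concretely. The sketch below is a minimal, hypothetical PyTorch adapter (the class name LoRALinear and the rank/alpha defaults are illustrative and not drawn from any of the papers listed here): the pretrained weight stays frozen and only a small low-rank update is trained for the new task, which limits how much previously acquired behaviour can be overwritten.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal low-rank adapter around a frozen linear layer (illustrative sketch).

    Only the low-rank factors A and B are trained on the new task; the
    pretrained weight W is frozen, so the effective update to the layer is
    restricted to a rank-r subspace.
    """
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A is initialized small, B at zero, so the adapter starts as a no-op.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update: W x + scaling * B A x
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(128, 64), rank=4)
    out = layer(torch.randn(2, 128))
    print(out.shape)  # torch.Size([2, 64])
```

In a continual-learning setting, one such adapter can be trained per task while the backbone remains shared, so earlier tasks' behaviour is preserved in the frozen weights.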
Papers
Regularization-Based Efficient Continual Learning in Deep State-Space Models
Yuanhang Zhang, Zhidi Lin, Yiyong Sun, Feng Yin, Carsten Fritsche
Don't Half-listen: Capturing Key-part Information in Continual Instruction Tuning
Yongquan He, Xuancheng Huang, Minghao Tang, Lingxun Meng, Xiang Li, Wei Lin, Wenyuan Zhang, Yifu Gao
Leveraging AI Predicted and Expert Revised Annotations in Interactive Segmentation: Continual Tuning or Full Training?
Tiezheng Zhang, Xiaoxi Chen, Chongyu Qu, Alan Yuille, Zongwei Zhou
Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning
Weijieying Ren, Xinlong Li, Lei Wang, Tianxiang Zhao, Wei Qin