Catastrophic Forgetting
Catastrophic forgetting is the tendency of artificial neural networks to lose previously acquired knowledge when trained sequentially on new tasks. Current research focuses on mitigating this issue through various strategies, including parameter-efficient fine-tuning methods (such as LoRA), generative model-based data replay, and optimization algorithms that constrain gradient updates or exploit hierarchical task structure. Addressing catastrophic forgetting is crucial for building robust, adaptable AI systems capable of continual learning in real-world applications, particularly in domains like medical imaging, robotics, and natural language processing, where data streams evolve continuously.
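To make the replay idea concrete, the following is a minimal sketch (assuming PyTorch) of rehearsal with a small reservoir-sampled memory buffer: examples from earlier tasks are mixed into each new-task batch so old knowledge keeps receiving gradient signal. The names ReplayBuffer and train_task are illustrative and do not correspond to any of the papers listed below.

```python
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Fixed-size memory of past examples, filled by reservoir sampling."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []   # list of (input, label) tensor pairs
        self.seen = 0    # total number of examples offered so far

    def add(self, x, y):
        # Reservoir sampling keeps an approximately uniform sample
        # over every example seen across all tasks.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def train_task(model, optimizer, loader, buffer, replay_size=16):
    """Train on the current task while rehearsing stored old-task examples."""
    model.train()
    for x, y in loader:
        loss = F.cross_entropy(model(x), y)
        if buffer.data:
            # Interleave a replayed mini-batch from earlier tasks.
            rx, ry = buffer.sample(replay_size)
            loss = loss + F.cross_entropy(model(rx), ry)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Store current-task examples for rehearsal during later tasks.
        for xi, yi in zip(x, y):
            buffer.add(xi.detach(), yi.detach())
```

The buffer size trades memory cost against retention: larger buffers approximate joint training more closely, while regularization-based alternatives (e.g., penalties that constrain updates to parameters important for old tasks) avoid storing raw data entirely.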
Papers
Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation
Peiyi Wang, Yifan Song, Tianyu Liu, Binghuai Lin, Yunbo Cao, Sujian Li, Zhifang Sui
A Memory Transformer Network for Incremental Learning
Ahmet Iscen, Thomas Bird, Mathilde Caron, Alireza Fathi, Cordelia Schmid
Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling
Brian S. Robinson, Clare W. Lau, Alexander New, Shane M. Nichols, Erik C. Johnson, Michael Wolmetz, William G. Coon
Selecting Related Knowledge via Efficient Channel Attention for Online Continual Learning
Ya-nan Han, Jian-wei Liu