Catastrophic Forgetting
Catastrophic forgetting is the tendency of artificial neural networks to lose previously acquired knowledge when trained on new tasks. Current research focuses on mitigating this issue through strategies such as parameter-efficient fine-tuning (e.g., LoRA), generative replay of past data, and optimization algorithms that constrain gradient updates or exploit hierarchical task structure. Addressing catastrophic forgetting is crucial for building robust, adaptable AI systems capable of continual learning in real-world applications, particularly in domains such as medical imaging, robotics, and natural language processing, where data streams evolve continuously.
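As a concrete illustration of one mitigation strategy, the sketch below shows a minimal experience-replay training loop in PyTorch: a bounded buffer retains a sample of past examples, and each gradient step mixes current-task data with replayed data. The `ReplayBuffer` class, the `train_task` helper, and all hyperparameters are illustrative assumptions, not taken from any of the papers listed here.

```python
# Minimal sketch of experience replay for continual learning (illustrative,
# not an implementation from any paper below).
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Reservoir-style buffer keeping a bounded, uniform sample of past examples."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []   # list of (x, y) tensor pairs
        self.seen = 0    # total examples offered so far

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling: each new example replaces a stored one
            # with probability capacity / seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_task(model, optimizer, loader, buffer, replay_batch=32):
    """Train on one task while rehearsing examples from earlier tasks."""
    model.train()
    for x, y in loader:
        loss = F.cross_entropy(model(x), y)
        if buffer.data:
            rx, ry = buffer.sample(replay_batch)
            # The replay term anchors the model to earlier tasks' examples,
            # counteracting drift toward the current task alone.
            loss = loss + F.cross_entropy(model(rx), ry)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        for xi, yi in zip(x, y):   # store current examples for future replay
            buffer.add(xi, yi)
```

In practice the same loop structure accommodates the other strategies mentioned above, for instance by swapping the replayed real examples for samples drawn from a generative model, or by adding a regularization term on the gradient update instead of a replay loss.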
Papers
Continual Learning with Neuromorphic Computing: Theories, Methods, and Applications
Mishal Fatima Minhas, Rachmad Vidya Wicaksana Putra, Falah Awwad, Osman Hasan, Muhammad Shafique
Lifelong Event Detection via Optimal Transport
Viet Dao, Van-Cuong Pham, Quyen Tran, Thanh-Thien Le, Linh Ngo Van, Thien Huu Nguyen
DOTA: Distributional Test-Time Adaptation of Vision-Language Models
Zongbo Han, Jialong Yang, Junfan Li, Qinghua Hu, Qianli Xu, Mike Zheng Shou, Changqing Zhang
Forgetting, Ignorance or Myopia: Revisiting Key Challenges in Online Continual Learning
Xinrui Wang, Chuanxing Geng, Wenhai Wan, Shaoyuan Li, Songcan Chen