Continual Learning
Continual learning aims to enable artificial intelligence models to learn new tasks sequentially without forgetting previously acquired knowledge, mirroring human learning. Current research focuses on mitigating "catastrophic forgetting" through techniques such as experience replay, regularization, and parameter isolation, as well as parameter-efficient fine-tuning methods like Low-Rank Adaptation (LoRA) and prompt tuning, applied within architectures ranging from transformers to convolutional neural networks (a minimal sketch of one such technique follows). The field is crucial for building robust, adaptable AI systems across diverse applications, from autonomous driving and robotics to medical image analysis and personalized education, wherever continuous adaptation to new data is essential.
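To make the replay idea concrete, below is a minimal PyTorch sketch of experience replay: a small reservoir-sampled buffer of past (input, label) pairs is interleaved with each new task's batches so that earlier tasks keep contributing gradient signal. The names ReplayBuffer and train_task, the buffer capacity, and the unweighted rehearsal loss are illustrative assumptions, not the method of any specific paper listed here.

# Minimal experience-replay sketch (illustrative names, not from any paper below).
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Reservoir-sampling buffer holding a bounded sample of past examples."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # list of (x, y) tensor pairs
        self.seen = 0    # total examples offered so far

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling: every example seen so far is retained
            # with equal probability capacity / seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_task(model, loader, buffer, optimizer, replay_batch=32):
    """One pass over the current task, rehearsing stored past examples."""
    model.train()
    for x, y in loader:
        loss = F.cross_entropy(model(x), y)
        if buffer.data:
            # Rehearsal term: re-fit a random sample of earlier tasks' data.
            rx, ry = buffer.sample(replay_batch)
            loss = loss + F.cross_entropy(model(rx), ry)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        for xi, yi in zip(x, y):  # bank current examples for future rehearsal
            buffer.add(xi.detach(), yi.detach())

In practice the same buffer would be reused across tasks, e.g. calling train_task(model, task_loader, buffer, optimizer) once per task so each new task rehearses samples from all earlier ones. Several papers below vary exactly this component, for instance compressing the stored memory (CMT) or avoiding real-data replay entirely.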
Papers
Towards Balanced Continual Multi-Modal Learning in Human Pose Estimation
Jiaxuan Peng, Mengshi Qi, Dong Zhao, Huadong Ma
Online Continual Learning: A Systematic Literature Review of Approaches, Challenges, and Benchmarks
Seyed Amir Bidaki, Amir Mohammadkhah, Kiyan Rezaee, Faeze Hassani, Sadegh Eskandari, Maziar Salahi, Mohammad M. Ghassemi
Never Reset Again: A Mathematical Framework for Continual Inference in Recurrent Neural Networks
Bojian Yin, Federico Corradi
MR-GDINO: Efficient Open-World Continual Object Detection
Bowen Dong, Zitong Huang, Guanglei Yang, Lei Zhang, Wangmeng Zuo
Continual Learning with Strategic Selection and Forgetting for Network Intrusion Detection
Xinchen Zhang, Running Zhao, Zhihan Jiang, Handi Chen, Yulong Ding, Edith C.H. Ngai, Shuang-Hua Yang
Continual Learning Using Only Large Language Model Prompting
Jiabao Qiu, Zixuan Ke, Bing Liu
TinySubNets: An efficient and low capacity continual learning strategy
Marcin Pietroń, Kamil Faber, Dominik Żurek, Roberto Corizzo
SegACIL: Solving the Stability-Plasticity Dilemma in Class-Incremental Semantic Segmentation
Jiaxu Li, Songning Lai, Rui Li, Di Fang, Kejia Fan, Jianheng Tang, Yuhan Zhao, Rongchang Zhao, Dongzhan Zhou, Yutao Yue, Huiping Zhuang
CMT: A Memory Compression Method for Continual Knowledge Learning of Large Language Models
Dongfang Li, Zetian Sun, Xinshuo Hu, Baotian Hu, Min Zhang
Filling Memory Gaps: Enhancing Continual Semantic Parsing via SQL Syntax Variance-Guided LLMs without Real Data Replay
Ruiheng Liu, Jinyu Zhang, Yanqi Song, Yu Zhang, Bailong Yang