Incremental Learning
Incremental learning aims to enable machine learning models to continuously acquire new knowledge from sequential data streams without forgetting previously learned information; such forgetting is known as catastrophic forgetting. Current research focuses on algorithms and model architectures, such as those employing knowledge distillation, generative replay, and various regularization techniques, to address this issue across diverse applications like image classification, gesture recognition, and medical image analysis. This field is significant because it moves machine learning closer to human-like continuous learning, with potential impact on personalized medicine, robotics, and other areas requiring adaptation to evolving data.
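To make the knowledge-distillation idea concrete, the sketch below shows a minimal, Learning-without-Forgetting-style distillation loss: a frozen copy of the old model acts as a teacher, and the updated model is penalized when its temperature-softened outputs on old classes drift from the teacher's. The function names (`softmax`, `distillation_loss`) and the per-example scalar form are illustrative assumptions, not taken from any of the papers listed below.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(new_logits, old_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened class distributions.

    new_logits: outputs of the model being updated (student).
    old_logits: outputs of a frozen copy trained on earlier tasks (teacher).
    A higher temperature softens both distributions, transferring more
    information about relative class similarities.
    """
    p_old = softmax(old_logits, temperature)
    p_new = softmax(new_logits, temperature)
    return sum(p * (math.log(p) - math.log(q)) for p, q in zip(p_old, p_new))
```

In training, this term is typically added to the ordinary cross-entropy on new-task labels, e.g. `loss = ce_new + lam * distillation_loss(new_logits, old_logits)`, where the weight `lam` trades plasticity on new data against stability on old knowledge.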
Papers
Practical Insights on Incremental Learning of New Human Physical Activity on the Edge
George Arvanitakis, Jingwei Zuo, Mthandazo Ndhlovu, Hakim Hacid
An Analysis of Initial Training Strategies for Exemplar-Free Class-Incremental Learning
Grégoire Petit, Michael Soumm, Eva Feillet, Adrian Popescu, Bertrand Delezoide, David Picard, Céline Hudelot
Exemplar-Free Continual Transformer with Convolutions
Anurag Roy, Vinay Kumar Verma, Sravan Voonna, Kripabandhu Ghosh, Saptarshi Ghosh, Abir Das
Audio-Visual Class-Incremental Learning
Weiguo Pian, Shentong Mo, Yunhui Guo, Yapeng Tian
MetaGCD: Learning to Continually Learn in Generalized Category Discovery
Yanan Wu, Zhixiang Chi, Yang Wang, Songhe Feng
When Prompt-based Incremental Learning Does Not Meet Strong Pretraining
Yu-Ming Tang, Yi-Xing Peng, Wei-Shi Zheng