Incremental Learning
Incremental learning aims to enable machine learning models to continuously acquire new knowledge from sequential data streams without losing previously learned information, a failure mode known as catastrophic forgetting. Current research focuses on algorithms and model architectures that mitigate this forgetting, such as those employing knowledge distillation, generative replay, and various regularization techniques, across diverse applications including image classification, gesture recognition, and medical image analysis. The field is significant because it moves machine learning closer to human-like continuous learning, with potential impact on personalized medicine, robotics, and other areas that must adapt to evolving data.
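The summary above names knowledge distillation as one of the main tools against catastrophic forgetting. The sketch below illustrates the general idea in PyTorch with a Learning-without-Forgetting-style loss: the updated model is trained on the new task while its outputs on old classes are kept close to those of the frozen previous model. This is a minimal illustration, not the method of any paper listed here; the function name, the hyperparameter defaults, and the assumption that the old model's logits are available for the current batch are choices made for this example.

```python
import torch
import torch.nn.functional as F


def incremental_distillation_loss(
    new_logits: torch.Tensor,   # logits of the model being trained (old + new classes)
    old_logits: torch.Tensor,   # logits of the frozen previous-task model (old classes only)
    targets: torch.Tensor,      # ground-truth labels for the current task's batch
    num_old_classes: int,       # number of classes the previous model was trained on
    temperature: float = 2.0,   # softening temperature for distillation (illustrative default)
    alpha: float = 0.5,         # trade-off between learning the new task and retention
) -> torch.Tensor:
    """Cross-entropy on the current task plus a distillation term that keeps the
    updated model's predictions on old classes close to the old model's."""
    ce = F.cross_entropy(new_logits, targets)

    # Distill only over the classes the previous model knows about.
    log_p_new = F.log_softmax(new_logits[:, :num_old_classes] / temperature, dim=1)
    p_old = F.softmax(old_logits / temperature, dim=1)
    kd = F.kl_div(log_p_new, p_old, reduction="batchmean") * temperature ** 2

    return alpha * ce + (1.0 - alpha) * kd


# Illustrative usage with random tensors standing in for real model outputs.
if __name__ == "__main__":
    batch, num_old, num_new = 8, 10, 5
    new_logits = torch.randn(batch, num_old + num_new)               # expanded classifier head
    old_logits = torch.randn(batch, num_old)                         # frozen teacher's outputs
    targets = torch.randint(num_old, num_old + num_new, (batch,))    # labels from the new task
    loss = incremental_distillation_loss(new_logits, old_logits, targets, num_old)
    print(loss.item())
```

In practice the distillation term is one of several options the summary mentions; generative replay instead rehearses synthesized samples of past classes, and regularization approaches penalize drift in parameters deemed important for earlier tasks.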
Papers
A Second-Order Perspective on Model Compositionality and Incremental Learning
Angelo Porrello, Lorenzo Bonicelli, Pietro Buzzega, Monica Millunzi, Simone Calderara, Rita Cucchiara
A Classifier-Free Incremental Learning Framework for Scalable Medical Image Segmentation
Xiaoyang Chen, Hao Zheng, Yifang Xie, Yuncong Ma, Tengfei Li
Less is more: Summarizing Patch Tokens for efficient Multi-Label Class-Incremental Learning
Thomas De Min, Massimiliano Mancini, Stéphane Lathuilière, Subhankar Roy, Elisa Ricci
Rethinking Class-Incremental Learning from a Dynamic Imbalanced Learning Perspective
Leyuan Wang, Liuyu Xiang, Yunlong Wang, Huijia Wu, Zhaofeng He