Incremental Learning
Incremental learning aims to enable machine learning models to continuously acquire new knowledge from sequential data streams without forgetting previously learned information, a challenge known as catastrophic forgetting. Current research focuses on developing algorithms and model architectures, such as those employing knowledge distillation, generative replay, and various regularization techniques, to address this issue across diverse applications like image classification, gesture recognition, and medical image analysis. This field is significant because it moves machine learning closer to human-like continuous learning capabilities, with potential impacts on personalized medicine, robotics, and other areas requiring adaptation to evolving data.
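One of the approaches mentioned above, replaying stored class representations (as in prototype replay), can be illustrated with a minimal sketch. The class name and the choice of a nearest-class-mean classifier here are illustrative assumptions, not taken from any of the listed papers: each class is summarized by a mean feature vector, so new classes can be added in later tasks without revisiting or overwriting what was learned for earlier ones.

```python
import numpy as np

class PrototypeIncrementalClassifier:
    """Illustrative sketch (not from the listed papers) of prototype-based
    class-incremental learning: each class is summarized by its mean
    feature vector, so new classes can be added without retraining on,
    or forgetting, previously seen classes."""

    def __init__(self):
        self.prototypes = {}  # class label -> mean feature vector

    def learn_task(self, features, labels):
        # Add prototypes for the new task's classes; prototypes of earlier
        # classes remain untouched, which avoids catastrophic forgetting.
        for c in np.unique(labels):
            self.prototypes[int(c)] = features[labels == c].mean(axis=0)

    def predict(self, features):
        # Classify by nearest prototype in Euclidean distance.
        classes = sorted(self.prototypes)
        protos = np.stack([self.prototypes[c] for c in classes])
        dists = ((features[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        return np.array(classes)[dists.argmin(axis=1)]

# Task 1 introduces classes 0 and 1; task 2 later adds class 2.
clf = PrototypeIncrementalClassifier()
clf.learn_task(np.array([[0.0, 0.0], [0.2, 0.0], [10.0, 0.0], [10.2, 0.0]]),
               np.array([0, 0, 1, 1]))
clf.learn_task(np.array([[0.0, 10.0], [0.0, 10.2]]),
               np.array([2, 2]))
preds = clf.predict(np.array([[0.1, 0.0], [10.1, 0.0], [0.0, 10.1]]))
```

Real methods replace the raw feature means with representations from a deep backbone and must also keep those features stable across tasks, which is where distillation and regularization come in; this sketch only shows why storing compact per-class summaries sidesteps forgetting at the classifier level.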
Papers
Incremental Online Learning of Randomized Neural Network with Forward Regularization
Junda Wang, Minghui Hu, Ning Li, Abdulaziz Al-Ali, Ponnuthurai Nagaratnam Suganthan
Adaptive Prototype Replay for Class Incremental Semantic Segmentation
Guilin Zhu, Dongyue Wu, Changxin Gao, Runmin Wang, Weidong Yang, Nong Sang
Slowing Down Forgetting in Continual Learning
Pascal Janetzky, Tobias Schlagenhauf, Stefan Feuerriegel
An Efficient Memory Module for Graph Few-Shot Class-Incremental Learning
Dong Li, Aijia Zhang, Junqi Gao, Biqing Qi
Inductive Graph Few-shot Class Incremental Learning
Yayong Li, Peyman Moghadam, Can Peng, Nan Ye, Piotr Koniusz