Incremental Learning
Incremental learning aims to enable machine learning models to continuously acquire new knowledge from sequential data streams without forgetting previously learned information; the abrupt loss of earlier knowledge when training on new data is known as catastrophic forgetting. Current research focuses on developing algorithms and model architectures, such as those employing knowledge distillation, generative replay, and various regularization techniques, to address this issue across diverse applications like image classification, gesture recognition, and medical image analysis. This field is significant because it moves machine learning closer to human-like continuous learning capabilities, with potential impacts on personalized medicine, robotics, and other areas requiring adaptation to evolving data.
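As a concrete illustration of the knowledge-distillation strategy mentioned above, the sketch below trains a classifier on a new task while a frozen copy of its previous state acts as a teacher, so that responses on the old classes are preserved. This is a minimal, generic sketch assuming a PyTorch classifier; the function names, hyperparameters (`alpha`, `temperature`), and data loader are illustrative placeholders, not the method of any specific paper listed here.

```python
# Hypothetical sketch of knowledge-distillation-based incremental learning.
# Assumes a PyTorch classifier whose output layer covers all classes seen so far.
import copy
import torch
import torch.nn.functional as F


def distillation_loss(new_logits, old_logits, temperature=2.0):
    """Soften both logit sets and pull the new model toward the old one."""
    old_probs = F.softmax(old_logits / temperature, dim=1)
    new_log_probs = F.log_softmax(new_logits / temperature, dim=1)
    return F.kl_div(new_log_probs, old_probs, reduction="batchmean") * temperature ** 2


def train_incremental_task(model, new_task_loader, num_old_classes,
                           epochs=10, alpha=0.5, lr=1e-3, device="cpu"):
    """Train `model` on a new task while distilling from a frozen snapshot of itself."""
    old_model = copy.deepcopy(model).to(device).eval()   # teacher: state before the new task
    for p in old_model.parameters():
        p.requires_grad_(False)

    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    for _ in range(epochs):
        for images, labels in new_task_loader:           # labels indexed over all classes seen so far
            images, labels = images.to(device), labels.to(device)
            logits = model(images)
            with torch.no_grad():
                old_logits = old_model(images)[:, :num_old_classes]

            ce = F.cross_entropy(logits, labels)                   # learn the new classes
            kd = distillation_loss(logits[:, :num_old_classes],    # retain old-class behavior
                                   old_logits)
            loss = (1 - alpha) * ce + alpha * kd

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

The weight `alpha` trades plasticity (fitting the new classes) against stability (matching the old model's softened predictions); generative replay and regularization-based methods address the same trade-off by replaying synthesized old-task data or constraining important parameters, respectively.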
Papers
Learning to Imagine: Diversify Memory for Incremental Learning using Unlabeled Data
Yu-Ming Tang, Yi-Xing Peng, Wei-Shi Zheng
An Efficient Domain-Incremental Learning Approach to Drive in All Weather Conditions
M. Jehanzeb Mirza, Marc Masana, Horst Possegger, Horst Bischof
Modeling Missing Annotations for Incremental Learning in Object Detection
Fabio Cermelli, Antonino Geraci, Dario Fontanel, Barbara Caputo
Energy-based Latent Aligner for Incremental Learning
K J Joseph, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer, Vineeth N Balasubramanian
Doodle It Yourself: Class Incremental Learning by Drawing a Few Sketches
Ayan Kumar Bhunia, Viswanatha Reddy Gajjala, Subhadeep Koley, Rohit Kundu, Aneeshan Sain, Tao Xiang, Yi-Zhe Song