Incremental Learning
Incremental learning aims to enable machine learning models to continuously acquire new knowledge from sequential data streams without forgetting previously learned information, a challenge known as catastrophic forgetting. Current research focuses on developing algorithms and model architectures, such as those employing knowledge distillation, generative replay, and various regularization techniques, to address this issue across diverse applications like image classification, gesture recognition, and medical image analysis. This field is significant because it moves machine learning closer to human-like continuous learning capabilities, with potential impacts on personalized medicine, robotics, and other areas requiring adaptation to evolving data.
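One of the techniques named above, knowledge distillation, penalizes the updated model when its predictions on old classes drift away from those of the previous model. Below is a minimal sketch of a temperature-scaled distillation loss in plain NumPy; the function names and the choice of KL divergence here are illustrative, not the formulation of any specific paper listed on this page.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with optional temperature scaling."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the old (teacher) model's softened outputs to the
    new (student) model's, scaled by T^2 as is conventional in distillation.
    A higher temperature spreads probability mass over more classes, so the
    student is pushed to preserve the teacher's full output distribution."""
    p = softmax(teacher_logits, temperature)  # old model's soft targets
    q = softmax(student_logits, temperature)  # new model's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * temperature ** 2)
```

In an incremental-learning loop this term would typically be added, with a weighting factor, to the ordinary cross-entropy loss on the new task's data, so the model learns new classes while staying close to its old behavior on previously seen ones.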
Papers
Neural Collapse Terminus: A Unified Solution for Class Incremental Learning and Its Variants
Yibo Yang, Haobo Yuan, Xiangtai Li, Jianlong Wu, Lefei Zhang, Zhouchen Lin, Philip Torr, Dacheng Tao, Bernard Ghanem
Balanced Destruction-Reconstruction Dynamics for Memory-replay Class Incremental Learning
Yuhang Zhou, Jiangchao Yao, Feng Hong, Ya Zhang, Yanfeng Wang
Federated Self-Learning with Weak Supervision for Speech Recognition
Milind Rao, Gopinath Chennupati, Gautam Tiwari, Anit Kumar Sahu, Anirudh Raju, Ariya Rastrow, Jasha Droppo
TADIL: Task-Agnostic Domain-Incremental Learning through Task-ID Inference using Transformer Nearest-Centroid Embeddings
Gusseppe Bravo-Rocca, Peini Liu, Jordi Guitart, Ajay Dholakia, David Ellison