Online Continual Learning
Online continual learning (OCL) focuses on training machine learning models that adapt to continuously arriving, non-stationary data streams without catastrophically forgetting previously learned information. Current research emphasizes efficient algorithms that address limited model throughput, prediction bias toward recently seen tasks, and imbalanced data distributions, often employing techniques such as experience replay, contrastive learning, and adaptive bias correction across a range of neural network architectures, including transformers and ResNets. OCL's significance lies in its potential to yield more robust, adaptable AI systems for real-world applications such as autonomous driving, fault diagnosis, and personalized recommendation, where continuous learning from evolving data is crucial.
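To make the experience-replay idea mentioned above concrete, here is a minimal sketch of a replay buffer maintained with reservoir sampling, a common choice in OCL because it keeps an approximately uniform sample of the stream under a fixed memory budget. The class and loop below are illustrative, not any specific paper's method; the `train_step` call is a hypothetical placeholder for the model update.

```python
import random

class ReservoirBuffer:
    """Fixed-capacity replay buffer filled by reservoir sampling,
    so every example seen so far has equal probability of being stored."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.seen = 0          # total stream examples observed
        self.data = []
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Keep the new example with probability capacity / seen
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # Mini-batch of past examples to interleave with incoming data
        return self.rng.sample(self.data, min(k, len(self.data)))

# Online loop (sketch): each incoming example is trained on together
# with a small batch replayed from memory, which counteracts forgetting.
buffer = ReservoirBuffer(capacity=100)
for x in range(1000):            # stand-in for a non-stationary stream
    replay = buffer.sample(10)
    # train_step(model, [x] + replay)   # hypothetical update call
    buffer.add(x)
```

Because insertion and replacement are O(1), the buffer keeps up with high-throughput streams, which is one of the efficiency concerns noted above.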
Papers
CQural: A Novel CNN based Hybrid Architecture for Quantum Continual Machine Learning
Sanyam Jain
Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right?
Hasan Abed Al Kader Hammoud, Ameya Prabhu, Ser-Nam Lim, Philip H. S. Torr, Adel Bibi, Bernard Ghanem
Online Continual Learning Without the Storage Constraint
Ameya Prabhu, Zhipeng Cai, Puneet Dokania, Philip Torr, Vladlen Koltun, Ozan Sener
Real-Time Evaluation in Online Continual Learning: A New Hope
Yasir Ghunaim, Adel Bibi, Kumail Alhamoud, Motasem Alfarra, Hasan Abed Al Kader Hammoud, Ameya Prabhu, Philip H. S. Torr, Bernard Ghanem
Online Continual Learning via the Knowledge Invariant and Spread-out Properties
Ya-nan Han, Jian-wei Liu