Catastrophic Forgetting
Catastrophic forgetting describes the phenomenon in which artificial neural networks, upon learning new tasks, lose previously acquired knowledge. Current research focuses on mitigating this issue through various strategies, including parameter-efficient fine-tuning methods (such as LoRA), generative model-based data replay, and novel optimization algorithms that constrain gradient updates or leverage hierarchical task structures. Addressing catastrophic forgetting is crucial for developing robust, adaptable AI systems capable of continual learning in real-world applications, particularly in domains such as medical imaging, robotics, and natural language processing, where data streams evolve constantly.
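Of the strategies above, replay is the simplest to illustrate. Below is a minimal sketch, assuming PyTorch, of plain experience replay with a reservoir-sampled rehearsal buffer: a few examples from earlier tasks are stored and mixed into each new-task update so old knowledge keeps contributing to the gradient. The class and function names (ReplayBuffer, train_step), the buffer capacity, and the equal weighting of current and replayed losses are illustrative assumptions, not the method of any paper listed here (which use generative replay, gradient projection, or other refinements of this idea).

```python
# Minimal experience-replay sketch for continual learning (illustrative only).
import random
import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Reservoir-style buffer holding (input, label) pairs from past tasks."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []   # stored (x, y) pairs
        self.seen = 0    # total examples observed so far

    def add(self, x, y):
        # Reservoir sampling keeps a uniform sample over everything seen.
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi.clone(), yi.clone()))
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi.clone(), yi.clone())

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def train_step(model, optimizer, x, y, buffer, replay_batch=32):
    """One update on the current task, regularized by replayed old examples."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)          # current-task loss
    if len(buffer.data) > 0:
        xr, yr = buffer.sample(replay_batch)
        loss = loss + F.cross_entropy(model(xr), yr)  # replay loss on old tasks
    loss.backward()
    optimizer.step()
    buffer.add(x, y)                              # store current batch for later
    return loss.item()
```

In this sketch the replayed loss simply shares the gradient step with the current task; the papers below replace the raw buffer with generated samples, uncertainty-aware memory management, or projections that keep the update orthogonal to directions important for past tasks.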
Papers
Forward-Backward Knowledge Distillation for Continual Clustering
Mohammadreza Sadeghi, Zihan Wang, Narges Armanfard
Federated Continual Learning Goes Online: Uncertainty-Aware Memory Management for Vision Tasks and Beyond
Giuseppe Serra, Florian Buettner
Learning to Continually Learn with the Bayesian Principle
Soochan Lee, Hyeonseong Jeon, Jaehyeon Son, Gunhee Kim
HyperInterval: Hypernetwork approach to training weight interval regions in continual learning
Patryk Krukowski, Anna Bielawska, Kamil Książek, Paweł Wawrzyński, Paweł Batorski, Przemysław Spurek
Rethinking Class-Incremental Learning from a Dynamic Imbalanced Learning Perspective
Leyuan Wang, Liuyu Xiang, Yunlong Wang, Huijia Wu, Zhaofeng He
Exploring the Evolution of Hidden Activations with Live-Update Visualization
Xianglin Yang, Jin Song Dong
Rehearsal-free Federated Domain-incremental Learning
Rui Sun, Haoran Duan, Jiahua Dong, Varun Ojha, Tejal Shah, Rajiv Ranjan
Continual Learning in Medical Imaging: A Survey and Practical Analysis
Mohammad Areeb Qazi, Anees Ur Rehman Hashmi, Santosh Sanjeev, Ibrahim Almakky, Numan Saeed, Camila Gonzalez, Mohammad Yaqub
Gradient Projection For Continual Parameter-Efficient Tuning
Jingyang Qiao, Zhizhong Zhang, Xin Tan, Yanyun Qu, Wensheng Zhang, Zhi Han, Yuan Xie