Continual Learning
Continual learning aims to enable artificial intelligence models to learn new tasks sequentially without forgetting previously acquired knowledge, mirroring human learning. Current research focuses on mitigating "catastrophic forgetting" through experience replay, regularization, parameter isolation, and parameter-efficient fine-tuning methods such as Low-Rank Adaptation (LoRA) and prompt tuning, applied across architectures including transformers and convolutional neural networks. The field is crucial for building robust, adaptable AI systems in applications where continuous adaptation to new data is essential, from autonomous driving and robotics to medical image analysis and personalized education.
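To make one of these techniques concrete, the sketch below shows a minimal form of experience replay with a reservoir-sampled memory buffer: each training step mixes the loss on the current batch with the loss on a batch drawn from past examples. This is a generic PyTorch illustration under assumed names (`ReservoirBuffer`, `replay_training_step`), not the method of any paper listed here.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReservoirBuffer:
    """Fixed-size memory holding a uniform sample of all examples seen so far."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.examples = []   # list of (x, y) tensor pairs
        self.num_seen = 0

    def add(self, x, y):
        # Reservoir sampling: every example ever seen has equal probability
        # of residing in the buffer, regardless of when it arrived.
        for xi, yi in zip(x, y):
            self.num_seen += 1
            if len(self.examples) < self.capacity:
                self.examples.append((xi, yi))
            else:
                j = random.randrange(self.num_seen)
                if j < self.capacity:
                    self.examples[j] = (xi, yi)

    def sample(self, batch_size):
        batch = random.sample(self.examples, min(batch_size, len(self.examples)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def replay_training_step(model, optimizer, buffer, x_new, y_new, replay_batch=32):
    """One step: loss on the current batch plus loss on a batch replayed from memory."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_new), y_new)
    if buffer.examples:
        x_old, y_old = buffer.sample(replay_batch)
        loss = loss + F.cross_entropy(model(x_old), y_old)
    loss.backward()
    optimizer.step()
    buffer.add(x_new, y_new)  # store current examples for future replay
    return loss.item()

# Illustrative usage with a toy classifier and random data.
model = nn.Linear(784, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
buffer = ReservoirBuffer(capacity=500)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
replay_training_step(model, optimizer, buffer, x, y)
```

The replayed loss anchors the model to earlier tasks while the current-batch loss drives new learning; regularization-based methods such as EWC achieve a similar anchoring through penalty terms on parameters rather than stored data.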
Papers
An Attention-based Representation Distillation Baseline for Multi-Label Continual Learning
Martin Menabue, Emanuele Frascaroli, Matteo Boschini, Lorenzo Bonicelli, Angelo Porrello, Simone Calderara
Continual Learning for Remote Physiological Measurement: Minimize Forgetting and Simplify Inference
Qian Liang, Yan Chen, Yang Hu