Supervised Continual Learning
Supervised continual learning trains deep neural networks on a continuous stream of labeled data while avoiding catastrophic forgetting of previously learned information, prioritizing efficient incremental updates over complete retraining. Current research focuses on improving efficiency and overcoming the limitations of existing methods, exploring architectures such as transformers and techniques such as rehearsal, prompting, and codebook-based approaches to mitigate forgetting while preserving performance. The field is central to building robust, resource-efficient AI systems that can adapt to evolving data streams in real-world applications such as personalized medicine and online learning environments.
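To make the rehearsal technique mentioned above concrete, here is a minimal sketch of experience replay in PyTorch: a small reservoir-sampled buffer of past examples is mixed into each new batch so the network keeps seeing old data. All names here (`ReplayBuffer`, `train_task`, the buffer sizes) are illustrative assumptions, not drawn from any specific paper in this collection.

```python
import random
import torch
import torch.nn as nn

class ReplayBuffer:
    """Fixed-capacity buffer holding a uniform sample of the stream."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []          # list of (input, label) pairs
        self.num_seen = 0       # total examples offered so far

    def add(self, x, y):
        # Reservoir sampling: every example seen so far has an equal
        # chance of being in the buffer, regardless of stream length.
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            idx = random.randrange(self.num_seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_task(model, loader, buffer, optimizer, replay_batch=32):
    """One pass over the current task, rehearsing stored examples."""
    loss_fn = nn.CrossEntropyLoss()
    for x, y in loader:
        loss = loss_fn(model(x), y)          # loss on the new task
        if buffer.data:                      # plus loss on replayed data
            rx, ry = buffer.sample(replay_batch)
            loss = loss + loss_fn(model(rx), ry)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        for xi, yi in zip(x, y):             # grow the buffer as we go
            buffer.add(xi.detach(), yi.detach())
```

Calling `train_task` once per task in sequence, with a single shared `ReplayBuffer`, gives the basic rehearsal baseline; prompting and codebook-based methods instead store compact learned representations rather than raw examples.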