Continual Learning Benchmark
Continual learning benchmarks evaluate a machine learning model's ability to acquire new knowledge incrementally without forgetting previously learned information, a capability that is crucial for real-world deployment. Current research focuses on developing and testing new algorithms and architectures, such as those employing low-rank adaptation, prompt-based methods, and memory-based (rehearsal) approaches, to mitigate "catastrophic forgetting" across diverse tasks and modalities (vision, language, audio). By providing standardized evaluation protocols, these benchmarks drive the development of more robust and efficient continual learning systems, with implications for areas such as personalized AI and autonomous systems.
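To make the memory-based approach mentioned above concrete, here is a minimal sketch of experience replay with a reservoir-sampled buffer in PyTorch. The `ReplayBuffer` class, the buffer capacity, and the synthetic two-task stream are illustrative assumptions for this sketch only, not the method of any paper listed below.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplayBuffer:
    """Fixed-capacity memory of past (x, y) examples, filled by reservoir sampling."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        for xi, yi in zip(x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi.clone(), yi.clone()))
            else:
                # Reservoir sampling keeps a uniform sample over the whole stream.
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = (xi.clone(), yi.clone())

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_task(model, opt, loader, buffer, replay_bs=32):
    """One task: interleave current-task batches with replayed memories."""
    model.train()
    for x, y in loader:
        loss = F.cross_entropy(model(x), y)
        if buffer.data:
            mx, my = buffer.sample(replay_bs)
            # The replay term penalizes forgetting of earlier tasks.
            loss = loss + F.cross_entropy(model(mx), my)
        opt.zero_grad()
        loss.backward()
        opt.step()
        buffer.add(x, y)

# Usage on a toy stream of two tasks: classes {0, 1}, then classes {2, 3}.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
buffer = ReplayBuffer(capacity=200)
for task in range(2):
    x = torch.randn(256, 20)
    y = torch.randint(2 * task, 2 * task + 2, (256,))
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(x, y), batch_size=32, shuffle=True)
    train_task(model, opt, loader, buffer)
```

A continual learning benchmark would then report accuracy on the first task after training on the second, measuring how much the replay term reduced forgetting relative to naive sequential fine-tuning.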
Papers
Primal Dual Continual Learning: Balancing Stability and Plasticity through Adaptive Memory Allocation
Juan Elenter, Navid NaderiAlizadeh, Tara Javidi, Alejandro Ribeiro
Towards a Unified Framework for Adaptable Problematic Content Detection via Continual Learning
Ali Omrani, Alireza S. Ziabari, Preni Golazizian, Jeffrey Sorensen, Morteza Dehghani
CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning
James Seale Smith, Leonid Karlinsky, Vyshnavi Gutta, Paola Cascante-Bonilla, Donghyun Kim, Assaf Arbelle, Rameswar Panda, Rogerio Feris, Zsolt Kira
Robust Mean Teacher for Continual and Gradual Test-Time Adaptation
Mario Döbler, Robert A. Marsden, Bin Yang