Continual Learning Benchmark
Continual learning benchmarks evaluate a machine learning model's ability to acquire new knowledge incrementally without forgetting what it learned earlier, a capability essential for systems deployed on non-stationary, real-world data. Current research focuses on developing and testing algorithms and architectures that mitigate "catastrophic forgetting", such as low-rank adaptation, prompt-based methods, and memory (rehearsal) approaches, across diverse tasks and modalities (vision, language, audio). By providing standardized evaluation protocols, these benchmarks drive the development of more robust and efficient continual learning systems, with implications for areas like personalized AI and autonomous systems.
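As a concrete illustration of what such a benchmark measures, the sketch below runs a minimal class-incremental protocol on synthetic data with a small rehearsal memory (one of the memory-based approaches mentioned above), then reports the two metrics most benchmarks share: average final accuracy and forgetting. All names, the toy data, and the buffer size are illustrative assumptions for this sketch, not taken from the papers listed here.

```python
# Minimal sketch of a class-incremental benchmark with experience replay.
# Synthetic Gaussian-blob tasks stand in for real datasets; the protocol
# (train tasks in sequence, evaluate on all tasks seen so far) is the
# generic one, not any specific paper's method.
import random
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_TASKS, CLASSES_PER_TASK, DIM = 3, 2, 20

def make_task(task_id, n=200):
    """Synthetic task: one Gaussian blob per class, classes offset per task."""
    ys = torch.randint(0, CLASSES_PER_TASK, (n,)) + task_id * CLASSES_PER_TASK
    xs = torch.randn(n, DIM) + ys.float().unsqueeze(1)  # class-dependent mean
    return xs, ys

model = nn.Linear(DIM, NUM_TASKS * CLASSES_PER_TASK)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

buffer, BUFFER_PER_TASK = [], 20  # rehearsal memory of past examples

def accuracy(xs, ys):
    with torch.no_grad():
        return (model(xs).argmax(1) == ys).float().mean().item()

tasks = [make_task(t) for t in range(NUM_TASKS)]
# acc_matrix[t][j] = accuracy on task j after finishing training on task t
acc_matrix = [[0.0] * NUM_TASKS for _ in range(NUM_TASKS)]

for t, (xs, ys) in enumerate(tasks):
    for _ in range(100):  # train on the current task plus replayed memories
        idx = torch.randint(0, len(xs), (32,))
        bx, by = xs[idx], ys[idx]
        if buffer:
            mem = random.sample(buffer, min(32, len(buffer)))
            bx = torch.cat([bx, torch.stack([m[0] for m in mem])])
            by = torch.cat([by, torch.stack([m[1] for m in mem])])
        opt.zero_grad()
        loss_fn(model(bx), by).backward()
        opt.step()
    buffer += [(xs[i], ys[i]) for i in range(BUFFER_PER_TASK)]
    for j in range(t + 1):  # evaluate on every task seen so far
        acc_matrix[t][j] = accuracy(*tasks[j])

# Standard metrics: average final accuracy over all tasks, and forgetting
# (peak accuracy on a task minus its accuracy after all training ends).
avg_acc = sum(acc_matrix[-1]) / NUM_TASKS
forgetting = sum(
    max(acc_matrix[t][j] for t in range(j, NUM_TASKS)) - acc_matrix[-1][j]
    for j in range(NUM_TASKS - 1)
) / max(NUM_TASKS - 1, 1)
print(f"average accuracy: {avg_acc:.3f}, forgetting: {forgetting:.3f}")
```

The accuracy matrix acc_matrix[t][j] is the common bookkeeping device behind both metrics; swapping the replay buffer for a low-rank adapter or a prompt pool would change only the training step, while the evaluation protocol stays the same.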
Papers
PHEVA: A Privacy-preserving Human-centric Video Anomaly Detection Dataset
Ghazal Alinezhad Noghre, Shanle Yao, Armin Danesh Pazho, Babak Rahimi Ardabili, Vinit Katariya, Hamed Tabkhi
Dual-CBA: Improving Online Continual Learning via Dual Continual Bias Adaptors from a Bi-level Optimization Perspective
Quanziang Wang, Renzhen Wang, Yichen Wu, Xixi Jia, Minghao Zhou, Deyu Meng