Exemplar-Free Methods

Exemplar-free continual learning trains machine learning models sequentially on new tasks without retaining any data from previous tasks, directly addressing the challenge of catastrophic forgetting. Current research develops algorithms and architectures, spanning convolutional neural networks and vision transformers, that mitigate forgetting through techniques such as adversarial perturbation, analytic classifiers, and gated class-attention mechanisms. The area matters because it enables efficient and privacy-preserving continual learning, which is crucial when data storage is limited or data privacy is paramount. Progress on exemplar-free methods is in turn driving advances in related settings such as online learning and unsupervised continual learning.

Papers