Paper ID: 2410.23751
EXACFS -- A CIL Method to mitigate Catastrophic Forgetting
S Balasubramanian, M Sai Subramaniam, Sai Sriram Talasu, P Yedu Krishna, Manepalli Pranav Phanindra Sai, Ravi Mukkamala, Darshan Gera
Deep neural networks (DNNs) excel at learning from static datasets but struggle with continual learning, where data arrives sequentially. Catastrophic forgetting, the loss of previously learned knowledge as new data is learned, is a primary challenge. This paper introduces EXponentially Averaged Class-wise Feature Significance (EXACFS) to mitigate this issue in the class incremental learning (CIL) setting. By estimating the significance of model features for each learned class using loss gradients, gradually aging the significance across incremental tasks, and preserving the significant features through a distillation loss, EXACFS effectively balances remembering old knowledge (stability) and learning new knowledge (plasticity). Extensive experiments on CIFAR-100 and ImageNet-100 demonstrate EXACFS's superior performance in preserving stability while acquiring plasticity.
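As a rough illustration of the mechanism the abstract describes, the sketch below estimates a per-class feature significance from loss gradients, ages it with an exponential moving average across incremental tasks, and uses it to weight a feature-distillation loss against the frozen previous-task model. This is a minimal sketch under stated assumptions, not the authors' implementation: the use of the mean absolute gradient as the significance measure, the aging factor `alpha`, and the squared-error distillation form are all illustrative choices.

```python
# Hedged sketch of the EXACFS idea from the abstract. Assumptions:
# significance ~ mean |d(loss)/d(feature)| over a batch of one class,
# aged by an exponential moving average; distillation weights the
# student/teacher feature gap by that significance.
import torch
import torch.nn.functional as F

ALPHA = 0.9  # assumed exponential-averaging ("aging") factor


def update_significance(sig_old, features, loss, alpha=ALPHA):
    """Exponentially average per-feature significance for one class.

    Significance is approximated here by the batch-mean magnitude of the
    loss gradient w.r.t. the features (one plausible reading of
    "significance ... using loss gradients").
    """
    grads = torch.autograd.grad(loss, features, retain_graph=True)[0]
    sig_new = grads.abs().mean(dim=0)            # shape: (feature_dim,)
    return alpha * sig_old + (1 - alpha) * sig_new


def exacfs_distill_loss(feat_student, feat_teacher, significance):
    """Significance-weighted feature distillation between the current
    model (student) and the frozen previous-task model (teacher)."""
    sq_gap = (feat_student - feat_teacher).pow(2)  # (batch, feature_dim)
    return (significance * sq_gap).sum(dim=1).mean()


# Toy usage with a stand-in feature extractor and classifier head.
torch.manual_seed(0)
net = torch.nn.Linear(8, 4)       # hypothetical feature extractor
head = torch.nn.Linear(4, 10)     # hypothetical classifier head
x = torch.randn(16, 8)
y = torch.randint(0, 10, (16,))

feats = net(x)
loss = F.cross_entropy(head(feats), y)
sig = update_significance(torch.zeros(4), feats, loss)
distill = exacfs_distill_loss(feats, feats.detach(), sig)
```

In an actual CIL loop, one such significance vector would be maintained per learned class and the distillation term added to the new-task loss, so that features the old classes depend on are changed least while less significant features remain free to adapt.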
Submitted: Oct 31, 2024