Catastrophic Forgetting
Catastrophic forgetting describes the phenomenon where artificial neural networks, upon learning new tasks, lose previously acquired knowledge. Current research focuses on mitigating this issue through various strategies, including parameter-efficient fine-tuning methods (like LoRA), generative model-based data replay, and novel optimization algorithms that constrain gradient updates or leverage hierarchical task structures. Addressing catastrophic forgetting is crucial for developing robust and adaptable AI systems capable of continuous learning in real-world applications, particularly in domains like medical imaging, robotics, and natural language processing where data streams are constantly evolving.
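To make the replay idea above concrete, here is a minimal sketch (not taken from any of the papers listed below) of experience replay in PyTorch: a small reservoir buffer of past examples is mixed into each new batch so gradient updates on the new task also revisit old data. Names such as ReplayBuffer and train_task are illustrative assumptions; real systems add task boundaries, regularization terms, and more careful sampling.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReplayBuffer:
    """Reservoir-style buffer that keeps a bounded sample of past examples."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []   # list of (x, y) tensors from earlier training
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps every seen example with equal probability.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def train_task(model, optimizer, loader, buffer, replay_batch=32):
    """One task's training loop: interleave new data with replayed old data."""
    model.train()
    for x, y in loader:
        loss = F.cross_entropy(model(x), y)
        if buffer.data:
            rx, ry = buffer.sample(replay_batch)
            # The replay term discourages forgetting of earlier data.
            loss = loss + F.cross_entropy(model(rx), ry)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        for xi, yi in zip(x, y):
            buffer.add(xi.detach(), yi.detach())


if __name__ == "__main__":
    # Toy demonstration with random data standing in for two sequential tasks.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    buffer = ReplayBuffer(capacity=500)
    for task in range(2):
        x = torch.randn(256, 20)
        y = torch.randint(0, 5, (256,))
        loader = torch.utils.data.DataLoader(
            torch.utils.data.TensorDataset(x, y), batch_size=32, shuffle=True
        )
        train_task(model, optimizer, loader, buffer)

The same training-loop structure accommodates the other strategies mentioned above: generative replay swaps the stored examples for samples drawn from a generative model, and regularization-based methods replace the replay loss term with a penalty that constrains updates to parameters important for earlier tasks.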
Papers
CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning
Erum Mushtaq, Duygu Nur Yaldiz, Yavuz Faruk Bakman, Jie Ding, Chenyang Tao, Dimitrios Dimitriadis, Salman Avestimehr
SwitchCIT: Switching for Continual Instruction Tuning
Xinbo Wu, Max Hartman, Vidhata Arjun Jayaraman, Lav R. Varshney
Bridge Past and Future: Overcoming Information Asymmetry in Incremental Object Detection
Qijie Mo, Yipeng Gao, Shenghao Fu, Junkai Yan, Ancong Wu, Wei-Shi Zheng
Mitigating Catastrophic Forgetting in Language Transfer via Model Merging
Anton Alexandrov, Veselin Raychev, Mark Niklas Müller, Ce Zhang, Martin Vechev, Kristina Toutanova
BiasPruner: Debiased Continual Learning for Medical Image Classification
Nourhan Bayasi, Jamil Fayyad, Alceu Bissoto, Ghassan Hamarneh, Rafeef Garbi