Catastrophic Interference
Catastrophic interference, also known as catastrophic forgetting, is the phenomenon in which a neural network abruptly loses previously learned information when it is trained on new data, which hinders continual learning. Current research focuses on mitigating this effect through techniques such as sparse adaptation (identifying and optimizing only a subset of model weights), interference-free low-rank adaptation, and weighted training methods that account for data distribution shifts during model retraining. Addressing catastrophic interference is crucial for improving the robustness and efficiency of machine learning models across diverse applications, ranging from recommendation systems and A/B testing to continual learning in robotics and natural language processing. A minimal sketch of the sparse-adaptation idea is given below.
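The sketch below illustrates one of the mitigation strategies mentioned above, sparse adaptation: only a small subset of weights, selected here by gradient magnitude on the new task, is updated during fine-tuning, while the rest are left untouched to limit interference with earlier tasks. It is a minimal, hypothetical PyTorch example; the toy model, data tensors, and the 5% selection ratio are illustrative assumptions and do not reproduce any specific method from the papers listed in this section.

```python
import torch
import torch.nn as nn

# Toy setup (illustrative only): a small classifier and synthetic "new task" data.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
new_task_x = torch.randn(64, 16)
new_task_y = torch.randint(0, 4, (64,))
loss_fn = nn.CrossEntropyLoss()

# 1) Score parameters by gradient magnitude on the new task and build binary masks
#    that keep only the most salient 5% of each tensor trainable (assumed ratio).
loss_fn(model(new_task_x), new_task_y).backward()
masks = []
for p in model.parameters():
    k = max(1, int(0.05 * p.numel()))
    threshold = p.grad.abs().flatten().topk(k).values.min()
    masks.append((p.grad.abs() >= threshold).float())
model.zero_grad()

# 2) Fine-tune on the new task, zeroing out gradients for all unselected weights
#    so the bulk of the network stays fixed and interference is limited.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(new_task_x), new_task_y)
    loss.backward()
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p.grad.mul_(m)
    optimizer.step()
```

The key design choice is where the mask comes from: here it is gradient magnitude on the new task, but published methods differ in how they pick the trainable subset and in whether the selection is fixed or updated during training.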
Papers
Explainable Artificial Intelligence for Quantifying Interfering and High-Risk Behaviors in Autism Spectrum Disorder in a Real-World Classroom Environment Using Privacy-Preserving Video Analysis
Barun Das, Conor Anderson, Tania Villavicencio, Johanna Lantz, Jenny Foster, Theresa Hamlin, Ali Bahrami Rad, Gari D. Clifford, Hyeokhyen Kwon
DD-rPPGNet: De-interfering and Descriptive Feature Learning for Unsupervised rPPG Estimation
Pei-Kai Huang, Tzu-Hsien Chen, Ya-Ting Chan, Kuan-Wen Chen, Chiou-Ting Hsu