Catastrophic Interference

Catastrophic interference describes the phenomenon in which learning new information causes a neural network to forget previously learned information, hindering continual learning. Current research focuses on mitigating this effect through techniques such as sparse adaptation (identifying and updating only a subset of model weights), interference-free low-rank adaptation, and weighted training methods that account for data distribution shifts during model retraining. Addressing catastrophic interference is crucial for improving the robustness and efficiency of machine learning models across diverse applications, from recommendation systems and A/B testing to continual learning in robotics and natural language processing.
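The forgetting effect itself is easy to reproduce. As a minimal sketch (the two synthetic tasks, weights, and hyperparameters below are illustrative choices, not taken from any particular paper), the NumPy snippet trains a linear model on one regression task, then continues training on a conflicting task with plain gradient descent, and measures how performance on the first task collapses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "tasks": linear regression problems with conflicting target weights.
w_a = np.array([1.0, -2.0, 3.0])
w_b = np.array([-3.0, 2.0, -1.0])

X = rng.normal(size=(200, 3))
y_a = X @ w_a
y_b = X @ w_b

def mse(w, X, y):
    """Mean squared error of the linear model w on data (X, y)."""
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.05, steps=300):
    """Plain full-batch gradient descent on the MSE loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

w = np.zeros(3)
w = train(w, X, y_a)             # learn task A
loss_a_before = mse(w, X, y_a)   # near zero: task A is learned

w = train(w, X, y_b)             # then learn task B, with no mitigation
loss_a_after = mse(w, X, y_a)    # task A error rises sharply: forgetting

print(f"task A loss before: {loss_a_before:.6f}, after: {loss_a_after:.3f}")
```

Mitigation techniques like those surveyed above intervene in this second training phase, for example by restricting which weights may change (sparse adaptation) or by constraining the update directions (low-rank variants) so that the task-A solution is preserved.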

Papers