Catastrophic Interference
Catastrophic interference describes the phenomenon in which learning new information causes a neural network to forget previously learned information, a major obstacle to continual learning. Current research focuses on mitigating this effect through techniques such as sparse adaptation (identifying and updating only a subset of model weights), interference-free low-rank adaptation, and weighted training methods that account for data distribution shifts during retraining. Addressing catastrophic interference is crucial for improving the robustness and efficiency of machine learning models across diverse applications, from recommendation systems and A/B testing to continual learning in robotics and natural language processing.
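The sparse-adaptation idea mentioned above can be illustrated with a deliberately tiny example. The sketch below is an assumption-laden toy (a two-weight linear model trained with plain gradient descent, not any specific published method): after fitting task A, naive retraining on a conflicting task B overwrites the task-A solution, while masking updates to all but one weight adapts to task B without disturbing the weight that encodes task A.

```python
# Toy sketch of sparse adaptation (illustrative only; model, tasks,
# and hyperparameters are invented for this example).

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def mse(w, data):
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

def train(w, data, mask, lr=0.05, steps=500):
    """Full-batch gradient descent on squared error.
    mask[i] = 0 freezes weight i; mask[i] = 1 lets it update."""
    for _ in range(steps):
        grad = [0.0] * len(w)
        for x, y in data:
            err = predict(w, x) - y
            for i, xi in enumerate(x):
                grad[i] += 2 * err * xi / len(data)
        w = [wi - lr * gi * mi for wi, gi, mi in zip(w, grad, mask)]
    return w

# Task A: y = 2 * x0 (uses only feature 0).
task_a = [((1.0, 0.0), 2.0), ((2.0, 0.0), 4.0)]
# Task B: a single underdetermined example that both weights can absorb.
task_b = [((1.0, 1.0), 5.0)]

w = train([0.0, 0.0], task_a, mask=[1, 1])       # learn task A fully

w_naive  = train(list(w), task_b, mask=[1, 1])   # update all weights
w_sparse = train(list(w), task_b, mask=[0, 1])   # freeze w0, update w1 only

# Naive retraining drags w0 away from its task-A value (interference);
# sparse adaptation fits task B while leaving task-A performance intact.
print("task-A error after naive retraining :", mse(w_naive, task_a))
print("task-A error after sparse adaptation:", mse(w_sparse, task_a))
```

Here both strategies reach zero error on task B, but only the sparse update preserves task A; real methods in this area automate the choice of which weights to mask rather than fixing it by hand.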