Catastrophic Interference
Catastrophic interference is the tendency of a neural network to forget previously learned information when it is trained on new data, which hinders continual learning. Current research focuses on mitigating this effect through techniques such as sparse adaptation (identifying and updating only a small subset of model weights), interference-free low-rank adaptation, and weighted training methods that account for data distribution shifts when a model is retrained. Addressing catastrophic interference is crucial for improving the robustness and efficiency of machine learning models across diverse applications, ranging from recommendation systems and A/B testing to continual learning in robotics and natural language processing.
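As a rough illustration of the sparse-adaptation idea mentioned above (a minimal sketch, not any particular paper's algorithm), the PyTorch snippet below fine-tunes only a small, magnitude-selected subset of weights on new data, leaving the rest frozen so that less previously learned behavior is overwritten. The `build_sparsity_masks` helper, the 5% `keep_fraction`, and the magnitude-based selection rule are illustrative assumptions.

```python
# Minimal sketch of sparse adaptation (illustrative, not a specific paper's method):
# only a fixed subset of weights receives gradient updates during retraining.
import torch
import torch.nn as nn

def build_sparsity_masks(model: nn.Module, keep_fraction: float = 0.05):
    """Mark the largest-magnitude keep_fraction of weights per tensor as trainable (heuristic choice)."""
    masks = {}
    for name, param in model.named_parameters():
        n = param.numel()
        k = max(1, int(keep_fraction * n))
        # k-th largest magnitude serves as the per-tensor threshold
        threshold = param.detach().abs().flatten().kthvalue(n - k + 1).values
        masks[name] = (param.detach().abs() >= threshold).float()
    return masks

def masked_sgd_step(model: nn.Module, masks: dict, lr: float = 1e-3):
    """Plain SGD update with gradients zeroed outside the selected subset."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.grad is not None:
                param -= lr * param.grad * masks[name]

# Usage: compute the loss on the new task, backpropagate, then apply the masked update.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
masks = build_sparsity_masks(model)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
masked_sgd_step(model, masks)
```

The mask restricts which parameters can move during retraining; interference-free low-rank adaptation pursues the same goal differently, by confining updates to added low-rank matrices instead of a subset of the original weights.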