Catastrophic Forgetting
Catastrophic forgetting is the phenomenon in which an artificial neural network, when trained on a new task, loses knowledge acquired on earlier tasks. Current research focuses on mitigating it through several strategies: parameter-efficient fine-tuning methods such as LoRA, data replay driven by generative models, and optimization algorithms that constrain gradient updates or exploit hierarchical task structure. Addressing catastrophic forgetting is crucial for building robust, adaptable AI systems capable of continual learning in real-world applications, particularly in domains such as medical imaging, robotics, and natural language processing, where data streams evolve constantly.
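As a concrete illustration of the regularization-based family of mitigation methods, below is a minimal sketch in the style of Elastic Weight Consolidation (EWC), which penalizes drift in parameters that were important to earlier tasks. This is an assumed PyTorch implementation, not the method of any specific paper listed here; the function names, the `lam` strength, and the diagonal Fisher estimate are illustrative choices.

```python
import torch

def estimate_fisher(model, data_loader, criterion):
    """Diagonal Fisher estimate: mean squared gradient of the loss
    w.r.t. each parameter, computed over the previous task's data.
    Large values mark parameters that mattered for the old task."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()
              if p.requires_grad}
    model.eval()
    for x, y in data_loader:
        model.zero_grad()
        criterion(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(data_loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, anchor, lam=100.0):
    """Quadratic penalty pulling each parameter back toward the value
    it had after the previous task, weighted by its importance.
    `lam` is an illustrative hyperparameter controlling the strength."""
    loss = torch.zeros(())
    for n, p in model.named_parameters():
        if n in fisher:
            loss = loss + (fisher[n] * (p - anchor[n]) ** 2).sum()
    return 0.5 * lam * loss
```

In use, one would train on the new task with `task_loss + ewc_penalty(model, fisher, anchor)`, where `anchor` is a detached copy of the parameters saved after the previous task (e.g. `{n: p.detach().clone() for n, p in model.named_parameters()}`). Replay-based methods take the complementary approach of mixing stored or generated old-task samples into the new task's batches instead of constraining the weights directly.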