Continuous Learning
Continuous learning (CL) aims to enable artificial intelligence models to learn new tasks sequentially without forgetting previously acquired knowledge, a failure mode known as catastrophic forgetting. Current research focuses on mitigating this forgetting through techniques such as knowledge distillation, replay buffers, and regularization, often leveraging pre-trained models (e.g., transformers) and exploring various architectures, including spiking neural networks. Robust and efficient CL methods are important for deploying AI in dynamic real-world environments, with impact on fields such as robotics, autonomous systems, and personalized medicine, where continuous adaptation is crucial.
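As a concrete illustration of the replay-buffer idea mentioned above, the sketch below interleaves a small memory of past examples with current-task batches so that gradient updates on new data also rehearse old data. It is a minimal sketch, not taken from any specific paper; the names (`ReplayBuffer`, `train_step`, `capacity`) and the reservoir-sampling choice are illustrative assumptions.

```python
import random
import torch
import torch.nn as nn

class ReplayBuffer:
    """Fixed-capacity memory of past (input, label) pairs (illustrative sketch)."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        # Reservoir sampling keeps an approximately uniform sample over everything seen so far.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_step(model, optimizer, x_new, y_new, buffer, replay_batch=32):
    """One update that mixes current-task loss with a rehearsal loss on replayed examples."""
    criterion = nn.CrossEntropyLoss()
    loss = criterion(model(x_new), y_new)
    if buffer.data:
        x_old, y_old = buffer.sample(replay_batch)
        loss = loss + criterion(model(x_old), y_old)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Store the new examples for rehearsal during later tasks.
    for x, y in zip(x_new, y_new):
        buffer.add(x, y)
    return loss.item()
```

Regularization-based methods (e.g., penalizing changes to parameters deemed important for earlier tasks) and distillation-based methods (matching the outputs of a frozen copy of the old model) follow the same pattern: they add a second term to the new-task loss that anchors the model to previously acquired behavior.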