Continual Learner
Continual learning aims to enable artificial intelligence models to learn new tasks sequentially without catastrophic forgetting, the loss of previously acquired knowledge that occurs when a model is trained on new data. Current research focuses on developing novel regularization techniques, exploring the role of model architecture (including MLPs and Vision Transformers), and leveraging generative models and large language models to improve knowledge retention and transfer. These advances are crucial for building more robust and adaptable AI systems, with applications ranging from autonomous driving and robotics to medical image analysis and personalized education.
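To make the regularization idea concrete, below is a minimal NumPy sketch in the spirit of Elastic Weight Consolidation (EWC), one well-known regularization technique for mitigating catastrophic forgetting. It adds a quadratic penalty that discourages moving parameters that were important for an earlier task. The function name, the diagonal-Fisher importance weights, and the example values are illustrative assumptions, not details from this overview.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Illustrative EWC-style regularizer (an assumed sketch, not from the source).

    theta      : current model parameters (flat vector)
    theta_star : parameters learned on the previous task
    fisher     : per-parameter importance (e.g. diagonal Fisher information)
    lam        : strength of the penalty

    Returns 0.5 * lam * sum_i fisher_i * (theta_i - theta_star_i)^2,
    so drifting on important parameters costs more than on unimportant ones.
    """
    theta = np.asarray(theta, dtype=float)
    theta_star = np.asarray(theta_star, dtype=float)
    fisher = np.asarray(fisher, dtype=float)
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Two parameters learned on task A; the first is deemed important (high Fisher).
theta_star = np.array([1.0, 1.0])
fisher = np.array([10.0, 0.1])

# Moving the important parameter by 1.0 is penalized 100x more heavily
# than moving the unimportant one by the same amount.
cost_important = ewc_penalty(np.array([2.0, 1.0]), theta_star, fisher)
cost_unimportant = ewc_penalty(np.array([1.0, 2.0]), theta_star, fisher)
```

In a full training loop this penalty would simply be added to the new task's loss, steering learning toward solutions that retain old-task knowledge while fitting new data.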