Continual Training
Continual training aims to let machine learning models, particularly large language and vision models, adapt to new data streams without catastrophically forgetting previously learned information. Current research focuses on efficient algorithms and architectures, such as parameter-efficient fine-tuning and replay strategies, that address this challenge across model families including transformers and recurrent neural networks. The field is central to building sustainable, adaptable AI systems: it improves performance in dynamic real-world environments and reduces the cost and environmental impact of frequent full retraining.
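To make the replay idea concrete, below is a minimal sketch of experience replay in a continual-training loop, assuming a generic PyTorch classifier. The names `ReplayBuffer`, `train_step`, and the `replay_k` parameter are illustrative, not taken from any specific paper; the buffer uses reservoir sampling so that examples from earlier tasks remain represented as the stream grows.

```python
import random
import torch


class ReplayBuffer:
    """Reservoir-sampling buffer that retains examples from earlier tasks."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []   # stored (input, target) pairs
        self.seen = 0    # total examples observed so far

    def add(self, x: torch.Tensor, y: torch.Tensor) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling: each observed example survives
            # with probability capacity / seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, k: int):
        return random.sample(self.data, min(k, len(self.data)))


def train_step(model, optimizer, loss_fn, batch, buffer, replay_k: int = 8):
    """One continual-training step: mix the current batch with replayed examples."""
    xs, ys = batch
    replayed = buffer.sample(replay_k)
    if replayed:
        rx = torch.stack([x for x, _ in replayed])
        ry = torch.stack([y for _, y in replayed])
        xs = torch.cat([xs, rx])
        ys = torch.cat([ys, ry])

    optimizer.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    optimizer.step()

    # Store the new examples so future steps can replay them.
    for x, y in zip(*batch):
        buffer.add(x, y)
    return loss.item()
```

In practice, replay is often combined with the parameter-efficient fine-tuning methods mentioned above (e.g., updating only adapter or low-rank weights), so that each incremental update is cheap while the replayed examples guard against forgetting.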