Continual Fine-Tuning
Continual fine-tuning adapts pre-trained models to new tasks sequentially while avoiding catastrophic forgetting of previously learned information. Current research emphasizes mitigating this forgetting through parameter-efficient fine-tuning (PEFT) techniques such as low-rank adaptation (LoRA), along with novel regularization methods, applied to architectures including transformers and convolutional neural networks. The area is crucial for efficient model updates in resource-constrained environments: it reduces the need for full retraining and enables personalized, adaptive AI systems in applications such as healthcare and autonomous driving.
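As an illustration, below is a minimal sketch of one common PEFT-based recipe: training a separate LoRA adapter for each task on a frozen pre-trained backbone, using the Hugging Face `transformers` and `peft` libraries. The model name (`facebook/opt-125m`), the toy task data, and all hyperparameters are placeholder assumptions for demonstration, not taken from any specific paper listed here.

```python
# Sketch: per-task LoRA adapters on a frozen backbone (one way to limit
# catastrophic forgetting, since the pre-trained weights are never updated).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_NAME = "facebook/opt-125m"  # placeholder: any small causal LM works
tokenizer = AutoTokenizer.from_pretrained(BASE_NAME)

# Hypothetical task stream; a real continual-learning setup would use
# full datasets arriving one task at a time.
tasks = {
    "task_a": ["first-task training text ..."],
    "task_b": ["second-task training text ..."],
}

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections (OPT naming)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

for task_name, texts in tasks.items():
    # Fresh frozen backbone plus a new LoRA adapter per task: earlier
    # knowledge in the backbone cannot be overwritten by later tasks.
    base_model = AutoModelForCausalLM.from_pretrained(BASE_NAME)
    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()  # typically <1% of weights train

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    model.train()
    for text in texts:
        batch = tokenizer(text, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    model.save_pretrained(f"adapters/{task_name}")  # one adapter dir per task
```

At inference time, the relevant adapter can be loaded back onto the shared backbone with `PeftModel.from_pretrained(base_model, "adapters/task_a")`, so a single base model can serve many tasks with small task-specific deltas.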
Papers

19 papers on this topic, published between August 25, 2023 and April 18, 2025.