Incremental Transfer Learning
Incremental transfer learning aims to sequentially train a single model on multiple datasets, improving performance across all of them while mitigating "catastrophic forgetting"—the loss of previously learned knowledge. Current research focuses on methods that balance plasticity (adapting to new data) against stability (retaining old knowledge), often employing techniques such as weight regularization and the combination of feature embeddings within adaptable model architectures. This approach is particularly valuable in settings such as multi-center medical image analysis, where data sharing is restricted: it enables robust models to be built from decentralized datasets while respecting privacy constraints.
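To make the regularization idea concrete, here is a minimal PyTorch sketch of a quadratic penalty that anchors the current weights to those learned on earlier datasets—a uniformly weighted simplification of elastic-weight-consolidation-style methods, not the specific approach of any paper listed here. The function name `l2_transfer_penalty` and the strength hyperparameter `lam` are illustrative assumptions.

```python
import torch
import torch.nn as nn

def l2_transfer_penalty(model: nn.Module, old_params: dict, lam: float = 0.1) -> torch.Tensor:
    """Quadratic penalty pulling the model's current parameters toward the
    snapshot taken after training on previous datasets (stability term)."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in old_params:
            # Penalize squared deviation from the previously learned weights.
            penalty = penalty + ((p - old_params[name]) ** 2).sum()
    return lam * penalty

# After finishing training on dataset k, snapshot the weights:
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# While training on dataset k+1, add the penalty to the usual task loss:
#   loss = task_loss + l2_transfer_penalty(model, old_params, lam=0.1)
```

The task loss drives plasticity on the new dataset, while the penalty term supplies stability; tuning `lam` trades one off against the other.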