Model Transfer
Model transfer, a core technique in machine learning, aims to leverage knowledge learned by a source model (often pre-trained on large datasets) to improve performance on a target task where labeled data is scarce. Current research emphasizes efficient transfer methods, such as parameter-efficient fine-tuning and instruction-based task selection, often employing techniques like contrastive learning and manifold learning to enhance transferability across diverse domains and model architectures. This approach is crucial for fields where labeled data is hard to obtain, including medical diagnosis, natural language processing, and remote sensing, enabling the deployment of powerful models despite limited supervision.
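As a concrete illustration of the basic transfer workflow described above, the sketch below loads a pre-trained backbone, freezes its parameters, and fine-tunes only a small task-specific head, a simple form of parameter-efficient fine-tuning. It assumes PyTorch and torchvision are available; the model choice, class count, and hyperparameters are placeholders rather than a prescribed recipe.

```python
# Minimal transfer-learning sketch (assumes PyTorch and torchvision).
import torch
import torch.nn as nn
from torchvision import models

# Source model pre-trained on ImageNet (weights download on first use).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all pre-trained parameters so only the new head is updated.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head for a hypothetical 10-class target task.
num_target_classes = 10
backbone.fc = nn.Linear(backbone.fc.in_features, num_target_classes)

# Only the head's (trainable) parameters are given to the optimizer.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_target_classes, (8,))
logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

In practice the frozen backbone could instead be adapted with lighter-weight schemes (e.g., adapter layers or low-rank updates) when even head-only training is too restrictive, but the freeze-and-replace pattern above is the common starting point when target data is limited.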