Task Transferability
Task transferability in machine learning concerns leveraging knowledge learned on one task (the source) to improve performance on a different but related task (the target), minimizing the need for extensive retraining. Current research emphasizes improving transferability across diverse modalities (image, text, audio, graph data) and task types (classification, regression, generation). It often employs foundation models and techniques such as prompt engineering, and examines the role of feature representations and model architectures (e.g., CNNs, Transformers, neural SDEs). This work is crucial for making machine learning systems more efficient and robust, particularly in data-scarce settings and in applications that must adapt to new domains or tasks, such as medical image analysis and robotics.
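The most common instantiation of this idea is fine-tuning: reuse a backbone pretrained on the source task and train only a lightweight head on the target task. Below is a minimal PyTorch sketch of that pattern; the ResNet-18 backbone, the 10-class target task, and the hyperparameters are illustrative assumptions, not drawn from the papers listed here.

```python
import torch
import torch.nn as nn
import torchvision.models as models

num_target_classes = 10  # hypothetical target task

# Load a backbone pretrained on a source task (here: ImageNet weights).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the source-task features so they are not retrained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classifier head; only this layer is trained on the target task.
backbone.fc = nn.Linear(backbone.fc.in_features, num_target_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """One optimization step on the target task; only the head updates."""
    optimizer.zero_grad()
    loss = criterion(backbone(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone keeps the update cheap and data-efficient; in practice, some backbone layers are often unfrozen once the head has converged.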
Papers
Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability
Ruifei He, Shuyang Sun, Jihan Yang, Song Bai, Xiaojuan Qi
PACTran: PAC-Bayesian Metrics for Estimating the Transferability of Pretrained Models to Classification Tasks
Nan Ding, Xi Chen, Tomer Levinboim, Soravit Changpinyo, Radu Soricut
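Transferability-estimation metrics such as PACTran aim to predict, cheaply and before any fine-tuning, how well a pretrained model will transfer to a target classification task. The sketch below is not the PACTran metric; it illustrates a simple linear-probe proxy commonly used as a baseline for such metrics, scoring frozen features with a cross-validated logistic regression. The synthetic features and labels are placeholders for embeddings extracted by a pretrained model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_score(features: np.ndarray, labels: np.ndarray) -> float:
    """Cross-validated accuracy of a linear probe on frozen features.

    A higher score suggests the source model's features transfer
    well to the target classification task.
    """
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, labels, cv=3).mean()

# Usage with synthetic stand-ins for pretrained-model embeddings:
rng = np.random.default_rng(0)
feats = rng.normal(size=(300, 64))     # 300 samples, 64-dim features
labs = rng.integers(0, 5, size=300)    # 5 target classes
print(f"probe transferability score: {probe_score(feats, labs):.3f}")
```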