Task Transferability
Task transferability in machine learning focuses on leveraging knowledge learned from one task (the source) to improve performance on a different but related task (the target), minimizing the need for extensive retraining. Current research emphasizes improving transferability across diverse modalities (image, text, audio, graph data) and tasks (classification, regression, generation); it often employs foundation models and techniques such as prompt engineering, and explores the role of feature representations and model architectures (e.g., CNNs, Transformers, NSDEs). This research is crucial for enhancing the efficiency and robustness of machine learning systems, particularly in data-scarce scenarios and in applications requiring adaptation to new domains or tasks, such as medical image analysis and robotics.
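The core idea can be illustrated with a minimal sketch of feature transfer: a representation is fit on an abundant source task, then frozen, and only a small head is fit on a few target examples. Everything here (the synthetic data, the shared projection `W_true`, the `features` extractor) is a hypothetical toy setup, not taken from any of the papers below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source task: abundant labeled data whose targets depend on
# a shared latent projection W_true (an assumption of this toy example).
W_true = rng.normal(size=(20, 5))
X_src = rng.normal(size=(500, 20))
Y_src = X_src @ W_true                      # multi-output source labels

# "Pretraining": recover the shared projection from the source task.
W_learned, *_ = np.linalg.lstsq(X_src, Y_src, rcond=None)

def features(X):
    """Frozen feature extractor transferred from the source task."""
    return np.tanh(X @ W_learned)

# Target task: only 10 labeled examples, but they depend on the same features.
c_true = rng.normal(size=5)
X_tgt = rng.normal(size=(10, 20))
y_tgt = features(X_tgt) @ c_true

# "Fine-tuning": fit only a 5-parameter linear head on the frozen features,
# instead of relearning all 100 weights of the representation from scratch.
head, *_ = np.linalg.lstsq(features(X_tgt), y_tgt, rcond=None)
y_pred = features(X_tgt) @ head
```

With 10 target examples and only 5 head parameters, the transferred representation lets the target fit succeed where training the full model from scratch would be badly underdetermined.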
Papers
GIST: Generated Inputs Sets Transferability in Deep Learning
Florian Tambon, Foutse Khomh, Giuliano Antoniol
Transferability and explainability of deep learning emulators for regional climate model projections: Perspectives for future applications
Jorge Bano-Medina, Maialen Iturbide, Jesus Fernandez, Jose Manuel Gutierrez
CAIT: Triple-Win Compression towards High Accuracy, Fast Inference, and Favorable Transferability For ViTs
Ao Wang, Hui Chen, Zijia Lin, Sicheng Zhao, Jungong Han, Guiguang Ding
Transferability of Representations Learned using Supervised Contrastive Learning Trained on a Multi-Domain Dataset
Alvin De Jun Tan, Clement Tan, Chai Kiat Yeo