Task Transferability
Task transferability in machine learning focuses on leveraging knowledge learned from one task (source) to improve performance on a different, related task (target), minimizing the need for extensive retraining. Current research emphasizes improving transferability across diverse modalities (image, text, audio, graph data) and tasks (classification, regression, generation). This work often employs foundation models and techniques like prompt engineering, and explores the role of feature representations and model architectures (e.g., CNNs, Transformers, NSDEs). Such research is crucial for enhancing the efficiency and robustness of machine learning systems, particularly in data-scarce scenarios and applications requiring adaptation to new domains or tasks, such as medical image analysis and robotics.
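The core idea above — reuse a representation learned on a data-rich source task and retrain only a small head on a data-scarce target task — can be sketched with a toy example. This is a minimal illustration with synthetic data and a fixed feature map standing in for a pretrained encoder; it is not taken from any of the papers listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    # Least-squares fit: weights w minimizing ||Xw - y||^2.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Shared nonlinear feature map (stand-in for an encoder "pretrained"
# on an abundant source task). Here W is simply assumed known.
W = rng.normal(size=(10, 4))
phi = lambda X: np.tanh(X @ W)

# Target task: only 20 labelled samples, generated from the same
# features with a new task-specific head (hypothetical data).
head_tgt = rng.normal(size=4)
X_tgt = rng.normal(size=(20, 10))
y_tgt = phi(X_tgt) @ head_tgt

# Transfer: freeze phi, fit only the 4-parameter head on target data.
head_hat = fit_linear(phi(X_tgt), y_tgt)

# Baseline: fit a linear model directly on raw inputs, ignoring
# the transferred representation.
w_scratch = fit_linear(X_tgt, y_tgt)

# Compare test error of both approaches on fresh target-task inputs.
X_test = rng.normal(size=(200, 10))
y_test = phi(X_test) @ head_tgt
err_transfer = np.mean((phi(X_test) @ head_hat - y_test) ** 2)
err_scratch = np.mean((X_test @ w_scratch - y_test) ** 2)
```

With the frozen feature map, only four parameters must be estimated from the small target set, so the transfer model generalizes better than the from-scratch linear baseline, which cannot capture the shared nonlinearity.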
Papers
Quantifying the Impact of Data Characteristics on the Transferability of Sleep Stage Scoring Models
Akara Supratak, Peter Haddawy
Improving the Transferability of Adversarial Samples by Path-Augmented Method
Jianping Zhang, Jen-tse Huang, Wenxuan Wang, Yichen Li, Weibin Wu, Xiaosen Wang, Yuxin Su, Michael R. Lyu