Task Transferability
Task transferability in machine learning focuses on leveraging knowledge learned from one task (the source) to improve performance on a different but related task (the target), minimizing the need for extensive retraining. Current research emphasizes improving transferability across diverse modalities (image, text, audio, graph data) and task types (classification, regression, generation), often employing foundation models and techniques such as prompt engineering, and exploring how feature representations and model architectures (e.g., CNNs, Transformers, NSDEs) affect transfer. This work is crucial for improving the efficiency and robustness of machine learning systems, particularly in data-scarce settings and in applications that require adaptation to new domains or tasks, such as medical image analysis and robotics.
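The core idea of transfer with minimal retraining can be sketched as a linear probe: a feature extractor trained on a source task is frozen, and only a small task-specific head is fit on the target data. The sketch below is purely illustrative (the "pretrained" extractor is a hand-crafted stand-in for a real backbone, and the dataset and function names are hypothetical), but it shows the pattern of reusing fixed features while training only the head.

```python
import math

def pretrained_features(x):
    """Frozen 'source-task' extractor: maps a scalar input to two fixed
    features. Stands in for a real pretrained backbone (assumption)."""
    return [x, x * x]

def train_head(data, lr=0.1, epochs=200):
    """Fit a logistic-regression head on the frozen features with SGD.
    Only these head parameters are updated; the extractor stays fixed."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. the logit z
            w[0] -= lr * g * f[0]
            w[1] -= lr * g * f[1]
            b -= lr * g
    return w, b

def predict(x, w, b):
    f = pretrained_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Tiny target task: label whether |x| > 1. It is not linearly separable
# in raw x, but is separable in the reused x^2 feature.
target_data = [(-2.0, 1), (-0.5, 0), (0.3, 0), (1.8, 1), (0.1, 0), (2.5, 1)]
w, b = train_head(target_data)
accuracy = sum(predict(x, w, b) == y for x, y in target_data) / len(target_data)
```

Because the target labels are linearly separable in the frozen feature space, training the two-parameter head alone solves the task; this is the efficiency argument behind reusing source-task representations instead of retraining a full model.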
Papers
Exploring the Transferability of Visual Prompting for Multimodal Large Language Models
Yichi Zhang, Yinpeng Dong, Siyuan Zhang, Tianzan Min, Hang Su, Jun Zhu
Fact: Teaching MLLMs with Faithful, Concise and Transferable Rationales
Minghe Gao, Shuang Chen, Liang Pang, Yuan Yao, Jisheng Dang, Wenqiao Zhang, Juncheng Li, Siliang Tang, Yueting Zhuang, Tat-Seng Chua