Task Transferability
Research on task transferability in machine learning focuses on leveraging knowledge learned from a source task to improve performance on a different but related target task, minimizing the need for extensive retraining. Current work emphasizes improving transferability across diverse modalities (image, text, audio, graph data) and tasks (classification, regression, generation), often employing foundation models and techniques such as prompt engineering, and examining how feature representations and model architectures (e.g., CNNs, Transformers, neural SDEs) affect transfer. This research is crucial for enhancing the efficiency and robustness of machine learning systems, particularly in data-scarce scenarios and in applications that require adaptation to new domains or tasks, such as medical image analysis and robotics.
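As a rough illustration of the basic transfer recipe described above, the sketch below freezes a feature extractor assumed to have been trained on a source task and fine-tunes only a small new head on the target task, so that little target data is needed. The network sizes, class counts, and synthetic data are placeholders for illustration and are not drawn from any of the listed papers.

```python
# Minimal transfer-learning sketch: reuse a source-trained feature extractor,
# train only a new target-task head. All shapes/data are illustrative.
import torch
import torch.nn as nn

# Source model: shared feature extractor plus a source-task head.
feature_extractor = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)
source_head = nn.Linear(64, 10)   # e.g., a 10-class source task
# (In practice the extractor would be pretrained on the source task here.)

# Transfer step: freeze the shared representation, attach a new target head.
for p in feature_extractor.parameters():
    p.requires_grad = False       # keep source features fixed

target_head = nn.Linear(64, 3)    # e.g., a 3-class target task
model = nn.Sequential(feature_extractor, target_head)

# Only the new head's parameters are optimized on the target data.
optimizer = torch.optim.Adam(target_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for a small target dataset.
x = torch.randn(128, 32)
y = torch.randint(0, 3, (128,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

Fully fine-tuning the extractor (instead of freezing it) is a common variant when more target data is available; the frozen-backbone setup shown here is simply the cheapest end of that spectrum.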
Papers
Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features
Saurabh Sihag, Gonzalo Mateos, Corey T. McMillan, Alejandro Ribeiro
HTPS: Heterogeneous Transferring Prediction System for Healthcare Datasets
Jia-Hao Syu, Jerry Chun-Wei Lin, Marcin Fojcik, Rafał Cupek