Multi-Task Transfer
Multi-task transfer learning aims to improve model performance on a target task by leveraging knowledge gained from training on related source tasks. Current research focuses on mitigating "negative transfer" – where source tasks hinder target task performance – through architectural innovations like modular networks (e.g., mixtures of experts) and task-specific adapters within transformer models. These approaches, often employing parameter-efficient fine-tuning techniques, seek to balance the benefits of shared knowledge with the need for task-specific adaptation, leading to improved efficiency and generalization across diverse tasks and domains.
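To make the shared-backbone-plus-adapters pattern concrete, here is a minimal sketch in PyTorch, assuming made-up dimensions, task names, and a toy encoder; it illustrates parameter-efficient multi-task adaptation in general (a frozen shared encoder with small task-specific adapters and heads), not the method of any particular paper listed here.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual connection."""

    def __init__(self, d_model: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class MultiTaskModel(nn.Module):
    """Shared transformer encoder reused across tasks; only adapters and heads are trained."""

    def __init__(self, d_model: int, num_classes_per_task: dict):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Freeze the shared backbone: only task-specific modules receive gradients,
        # which is the parameter-efficient part of the setup.
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        self.adapters = nn.ModuleDict(
            {task: Adapter(d_model) for task in num_classes_per_task}
        )
        self.heads = nn.ModuleDict(
            {task: nn.Linear(d_model, n) for task, n in num_classes_per_task.items()}
        )

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        h = self.encoder(x)                      # shared representation
        h = self.adapters[task](h)               # task-specific adaptation
        return self.heads[task](h.mean(dim=1))   # pooled, per-task prediction


# Usage with two hypothetical tasks sharing one backbone.
model = MultiTaskModel(d_model=64, num_classes_per_task={"sentiment": 2, "topic": 5})
tokens = torch.randn(8, 16, 64)  # (batch, sequence, d_model), placeholder features
logits = model(tokens, task="sentiment")
print(logits.shape)  # torch.Size([8, 2])
```

Because only the adapters and heads are trainable, each new task adds a small number of parameters while the shared encoder stays fixed, which is one common way to limit negative transfer between tasks.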