Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on mitigating task interference and optimization difficulties, exploring architectures such as Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and to yield more robust, adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
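The simplest way to realize this idea is hard parameter sharing: a single shared encoder feeds one lightweight head per task, and training minimizes a weighted sum of per-task losses. Below is a minimal PyTorch sketch of that setup; the model structure, task names, and loss weights are illustrative assumptions, not the method of any paper listed here.

```python
import torch
import torch.nn as nn


class SharedEncoderMTL(nn.Module):
    """Hard parameter sharing: one shared encoder, one lightweight head per task."""

    def __init__(self, in_dim: int, hidden_dim: int, task_out_dims: dict):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific heads on top of the shared representation.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden_dim, out_dim) for name, out_dim in task_out_dims.items()}
        )

    def forward(self, x: torch.Tensor) -> dict:
        z = self.encoder(x)
        return {name: head(z) for name, head in self.heads.items()}


if __name__ == "__main__":
    # Toy example: one classification task and one regression task share an encoder.
    model = SharedEncoderMTL(in_dim=32, hidden_dim=64,
                             task_out_dims={"classify": 10, "regress": 1})
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.randn(16, 32)              # toy input batch
    y_cls = torch.randint(0, 10, (16,))  # toy classification labels
    y_reg = torch.randn(16, 1)           # toy regression targets

    out = model(x)
    # Joint objective: weighted sum of per-task losses (the 0.5 weight is illustrative).
    loss = (nn.functional.cross_entropy(out["classify"], y_cls)
            + 0.5 * nn.functional.mse_loss(out["regress"], y_reg))
    loss.backward()
    opt.step()
```

Approaches surveyed in the papers below (sparse training, MoE gating, model merging) can be viewed as refinements of this baseline that decide how much of the network, or which expert, each task should actually share.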
Papers
Multi-Task Learning for Integrated Automated Contouring and Voxel-Based Dose Prediction in Radiotherapy
Sangwook Kim, Aly Khalifa, Thomas G. Purdie, Chris McIntosh
Proactive Gradient Conflict Mitigation in Multi-Task Learning: A Sparse Training Perspective
Zhi Zhang, Jiayi Shen, Congfeng Cao, Gaole Dai, Shiji Zhou, Qizhe Zhang, Shanghang Zhang, Ekaterina Shutova
Task Arithmetic Through The Lens Of One-Shot Federated Learning
Zhixu Tao, Ian Mason, Sanjeev Kulkarni, Xavier Boix
ATM: Improving Model Merging by Alternating Tuning and Merging
Luca Zhou, Daniele Solombrino, Donato Crisostomi, Maria Sofia Bucarelli, Fabrizio Silvestri, Emanuele Rodolà
Advancing Robust Underwater Acoustic Target Recognition through Multi-task Learning and Multi-Gate Mixture-of-Experts
Yuan Xie, Jiawei Ren, Junfeng Li, Ji Xu