Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and performance by training a single model to handle multiple related tasks simultaneously. Current research focuses on effective strategies for sharing information across tasks, including novel architectures such as multi-expert systems and the adaptation of large language models (LLMs) to multiple downstream tasks. MTL is particularly valuable when data or compute is limited, and it sees use in fields such as medical image analysis, robotics, and online advertising, where efficiency and generalization are crucial.
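To make the information-sharing idea concrete, below is a minimal sketch of hard parameter sharing, the most common MTL pattern: a shared trunk learns features used by all tasks, while lightweight task-specific heads produce per-task outputs. All names, dimensions, and the two example tasks here are illustrative assumptions, not the method of any paper listed below.

```python
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    """Hypothetical hard-parameter-sharing model: one trunk, many heads."""

    def __init__(self, in_dim: int, hidden_dim: int, task_out_dims: dict):
        super().__init__()
        # Trunk parameters receive gradients from every task's loss.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # One small head per task; only that task's loss updates it.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden_dim, out_dim)
             for name, out_dim in task_out_dims.items()}
        )

    def forward(self, x: torch.Tensor) -> dict:
        features = self.trunk(x)
        return {name: head(features) for name, head in self.heads.items()}

# Joint training step: sum the per-task losses (optionally weighted) and
# backpropagate once, so the trunk receives a combined gradient signal.
model = SharedTrunkMTL(in_dim=32, hidden_dim=64,
                       task_out_dims={"cls": 10, "reg": 1})
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 32)  # toy batch
targets = {"cls": torch.randint(0, 10, (8,)), "reg": torch.randn(8, 1)}

outputs = model(x)
loss = (nn.functional.cross_entropy(outputs["cls"], targets["cls"])
        + nn.functional.mse_loss(outputs["reg"], targets["reg"]))
opt.zero_grad()
loss.backward()
opt.step()
```

Summing losses with equal weights is the simplest choice; much of the research above (e.g., parameter update balancing, multi-expert routing) addresses what to do when tasks conflict and a naive sum lets one task dominate the shared parameters.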
Papers
MedUniSeg: 2D and 3D Medical Image Segmentation via a Prompt-driven Universal Model
Yiwen Ye, Ziyang Chen, Jianpeng Zhang, Yutong Xie, Yong Xia
$M^3EL$: A Multi-task Multi-topic Dataset for Multi-modal Entity Linking
Fang Wang, Shenglin Yin, Xiaoying Bai, Minghao Hu, Tianwei Yan, Yi Liang
A Parameter Update Balancing Algorithm for Multi-task Ranking Models in Recommendation Systems
Jun Yuan, Guohao Cai, Zhenhua Dong