Multi-Task Learning
Multi-task learning (MTL) trains a single model to handle multiple related tasks simultaneously, with the aim of improving both efficiency and performance. Current research focuses on effective strategies for sharing information across tasks, including novel architectures such as multi-expert systems and the adaptation of large language models (LLMs) to new applications. MTL is particularly valuable when data or compute is limited, and it finds use in diverse fields such as medical image analysis, robotics, and online advertising, where efficiency and generalization are crucial.
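To make the core idea concrete, below is a minimal sketch of the most common MTL design, hard parameter sharing: a shared encoder trained jointly by all tasks, with a small task-specific head per task. This is a generic illustration, not the architecture of any paper listed here; the class name, layer sizes, task names, and the unweighted loss sum are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared encoder ("hard parameter sharing") with one lightweight head per task."""
    def __init__(self, in_dim: int, hidden_dim: int, task_out_dims: dict):
        super().__init__()
        # Shared trunk: every task backpropagates through these weights,
        # which is where cross-task information transfer happens.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific heads: small, independent output layers.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden_dim, out_dim)
             for name, out_dim in task_out_dims.items()}
        )

    def forward(self, x: torch.Tensor) -> dict:
        z = self.encoder(x)
        return {name: head(z) for name, head in self.heads.items()}

# Usage with two hypothetical tasks: 10-way classification and scalar regression.
model = HardSharingMTL(in_dim=32, hidden_dim=64, task_out_dims={"cls": 10, "reg": 1})
x = torch.randn(8, 32)
y_cls = torch.randint(0, 10, (8,))
y_reg = torch.randn(8, 1)

out = model(x)
# Here the total loss is a plain unweighted sum of per-task losses;
# choosing or learning these weights is itself an active MTL research question.
loss = (nn.functional.cross_entropy(out["cls"], y_cls)
        + nn.functional.mse_loss(out["reg"], y_reg))
loss.backward()
```

Multi-expert systems mentioned above replace the single shared trunk with several expert sub-networks and a routing mechanism that mixes them per task, trading the simplicity of a fully shared encoder for finer control over what is shared.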
Papers
M³D: A Multimodal, Multilingual and Multitask Dataset for Grounded Document-level Information Extraction
Jiang Liu, Bobo Li, Xinran Yang, Na Yang, Hao Fei, Mingyao Zhang, Fei Li, Donghong Ji
MT3DNet: Multi-Task Learning Network for 3D Surgical Scene Reconstruction
Mithun Parab, Pranay Lendave, Jiyoung Kim, Thi Quynh Dan Nguyen, Palash Ingle
Cooperative and Collaborative Multi-Task Semantic Communication for Distributed Sources
Ahmad Halimi Razlighi, Maximilian H. V. Tillmann, Edgar Beck, Carsten Bockelmann, Armin Dekorsy
A Multi-Task Role-Playing Agent Capable of Imitating Character Linguistic Styles
Siyuan Chen, Qingyi Si, Chenxu Yang, Yunzhi Liang, Zheng Lin, Huan Liu, Weiping Wang