Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and performance by training a single model to handle multiple related tasks simultaneously. Current research focuses on effective strategies for sharing information between tasks, including novel architectures such as multi-expert systems and the adaptation of large language models (LLMs) to a range of applications. MTL is particularly valuable when data or compute is limited, and it finds use in diverse fields such as medical image analysis, robotics, and online advertising, where efficiency and generalization are crucial. A minimal illustration of the core idea is sketched below.
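As a rough illustration of "one model, multiple tasks" (not the method of any listed paper), the sketch below shows hard parameter sharing, a common MTL baseline: a shared encoder feeds several task-specific heads, and per-task losses are summed so gradients from every task update the shared parameters. All module names, dimensions, and loss weights here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class HardSharingMTL(nn.Module):
    """Hard parameter sharing: one shared encoder, one head per task."""

    def __init__(self, in_dim=32, hidden_dim=64, task_out_dims=(10, 3)):
        super().__init__()
        # Encoder parameters are shared by all tasks.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
        )
        # Each task gets its own lightweight output head.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, d) for d in task_out_dims
        )

    def forward(self, x):
        z = self.encoder(x)                        # shared representation
        return [head(z) for head in self.heads]    # one output per task


if __name__ == "__main__":
    model = HardSharingMTL()
    x = torch.randn(8, 32)                         # batch of 8 examples
    y0 = torch.randint(0, 10, (8,))                # labels for task 0 (10 classes)
    y1 = torch.randint(0, 3, (8,))                 # labels for task 1 (3 classes)

    out0, out1 = model(x)
    # Unweighted sum of per-task losses; real systems often tune these weights.
    loss = nn.functional.cross_entropy(out0, y0) + nn.functional.cross_entropy(out1, y1)
    loss.backward()                                # gradients flow into the shared encoder
    print(f"combined loss: {loss.item():.4f}")
```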
Papers
A dual task learning approach to fine-tune a multilingual semantic speech encoder for Spoken Language Understanding
Gaëlle Laperrière, Sahar Ghannay, Bassam Jabaian, Yannick Estève
MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic
Yuyan Zhou, Liang Song, Bingning Wang, Weipeng Chen
3M: Multi-modal Multi-task Multi-teacher Learning for Game Event Detection
Thye Shan Ng, Feiqi Cao, Soyeon Caren Han
XLand-100B: A Large-Scale Multi-Task Dataset for In-Context Reinforcement Learning
Alexander Nikulin, Ilya Zisman, Alexey Zemtsov, Viacheslav Sinii, Vladislav Kurenkov, Sergey Kolesnikov