Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on addressing challenges such as task interference and optimization difficulties, exploring approaches such as Mixture-of-Experts (MoE) layers, low-rank adapters, and hierarchical models to enhance performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and create more robust and adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
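To make the core setup concrete, below is a minimal sketch of hard-parameter-sharing MTL, assuming a shared encoder feeding task-specific heads with a simple weighted sum of per-task losses; all module names, dimensions, and weights are illustrative, not taken from any of the listed papers.

```python
# Minimal hard-parameter-sharing multi-task model: a shared encoder feeds
# task-specific heads, and per-task losses are summed so gradients from all
# tasks update the shared parameters jointly. Names and sizes are illustrative.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, in_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class MultiTaskModel(nn.Module):
    def __init__(self, task_out_dims, in_dim=128, hidden=256):
        super().__init__()
        self.encoder = SharedEncoder(in_dim, hidden)
        # One lightweight head per task; each head sees the shared representation.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden, out_dim)
            for name, out_dim in task_out_dims.items()
        })

    def forward(self, x):
        z = self.encoder(x)
        return {name: head(z) for name, head in self.heads.items()}

# Joint training step with a simple static task weighting.
model = MultiTaskModel({"classify": 10, "regress": 1})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fns = {"classify": nn.CrossEntropyLoss(), "regress": nn.MSELoss()}
task_weights = {"classify": 1.0, "regress": 0.5}  # hypothetical weights

x = torch.randn(32, 128)
targets = {"classify": torch.randint(0, 10, (32,)),
           "regress": torch.randn(32, 1)}

outputs = model(x)
loss = sum(task_weights[t] * loss_fns[t](outputs[t], targets[t]) for t in outputs)
optimizer.zero_grad()
loss.backward()   # the shared encoder receives gradients from every task
optimizer.step()
```

In this setup, conflicting gradients from different tasks on the shared encoder are one source of the task interference that methods such as MoE routing, low-rank adapters, and task-priority weighting aim to mitigate.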
Papers
StreamSpeech: Simultaneous Speech-to-Speech Translation with Multi-task Learning
Shaolei Zhang, Qingkai Fang, Shoutao Guo, Zhengrui Ma, Min Zhang, Yang Feng
Giving each task what it needs -- leveraging structured sparsity for tailored multi-task learning
Richa Upadhyay, Ronald Phlypo, Rajkumar Saini, Marcus Liwicki
Quantifying Task Priority for Multi-Task Optimization
Wooseong Jeong, Kuk-Jin Yoon