Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on challenges such as task interference and optimization difficulties, exploring architectures such as Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance across diverse tasks. MTL's significance lies in its potential to improve resource utilization and to create more robust and adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
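To make the core idea concrete, below is a minimal sketch of the most common MTL setup, hard parameter sharing: a single shared encoder feeds one lightweight head per task, and the per-task losses are summed for a joint update. This is an illustrative example only (module names, dimensions, and the equal loss weighting are assumptions, not drawn from any of the listed papers); much of the research surveyed here, such as fair resource allocation or task grouping, is precisely about replacing the naive equal weighting used at the end.

```python
import torch
import torch.nn as nn

class SharedBackboneMTL(nn.Module):
    """Hard parameter sharing: one shared encoder, one output head per task."""
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # One small head per task, keyed by task name.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, out_dim)
            for name, out_dim in task_out_dims.items()
        })

    def forward(self, x):
        z = self.encoder(x)  # shared representation used by every task
        return {name: head(z) for name, head in self.heads.items()}

# Toy usage: two classification tasks trained with a summed loss.
model = SharedBackboneMTL(in_dim=16, hidden_dim=64,
                          task_out_dims={"task_a": 3, "task_b": 5})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 16)  # one shared input batch
labels = {"task_a": torch.randint(0, 3, (32,)),
          "task_b": torch.randint(0, 5, (32,))}

outputs = model(x)
# Equal task weights are assumed here; choosing or adapting these weights
# (to reduce task interference) is an active research question.
loss = sum(loss_fn(outputs[t], labels[t]) for t in outputs)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```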
Papers
Speaker-Independent Dysarthria Severity Classification using Self-Supervised Transformers and Multi-Task Learning
Lauren Stumpf, Balasundaram Kadirvelu, Sigourney Waibel, A. Aldo Faisal
VEnvision3D: A Synthetic Perception Dataset for 3D Multi-Task Model Research
Jiahao Zhou, Chen Long, Yue Xie, Jialiang Wang, Boheng Li, Haiping Wang, Zhe Chen, Zhen Dong
Fair Resource Allocation in Multi-Task Learning
Hao Ban, Kaiyi Ji
Towards Principled Task Grouping for Multi-Task Learning
Chenguang Wang, Xuanhao Pan, Tianshu Yu
Multi-Task Learning for Routing Problem with Cross-Problem Zero-Shot Generalization
Fei Liu, Xi Lin, Zhenkun Wang, Qingfu Zhang, Xialiang Tong, Mingxuan Yuan
Unified View of Grokking, Double Descent and Emergent Abilities: A Perspective from Circuits Competition
Yufei Huang, Shengding Hu, Xu Han, Zhiyuan Liu, Maosong Sun