Multi-Task Learning
Multi-task learning (MTL) aims to improve efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on challenges such as task interference and optimization difficulties, exploring architectures like Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and to create more robust, adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
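To make the shared-backbone idea that these architectural variants build on concrete, below is a minimal hard-parameter-sharing sketch: a shared encoder feeds task-specific heads, and the per-task losses are summed. Module names, dimensions, and the uniform loss weighting are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal hard-parameter-sharing MTL sketch (assumes PyTorch is available).
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    """Backbone shared across all tasks (dimensions are illustrative)."""

    def __init__(self, in_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MultiTaskModel(nn.Module):
    """Shared encoder with one lightweight head per task."""

    def __init__(self, task_out_dims: dict, hidden_dim: int = 64):
        super().__init__()
        self.encoder = SharedEncoder(hidden_dim=hidden_dim)
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden_dim, out_dim) for name, out_dim in task_out_dims.items()}
        )

    def forward(self, x: torch.Tensor) -> dict:
        z = self.encoder(x)
        return {name: head(z) for name, head in self.heads.items()}


if __name__ == "__main__":
    model = MultiTaskModel({"classify": 10, "regress": 1})
    x = torch.randn(8, 32)
    targets = {
        "classify": torch.randint(0, 10, (8,)),
        "regress": torch.randn(8, 1),
    }
    outputs = model(x)

    # Naive uniform task weighting; mitigating task interference typically
    # requires more careful loss weighting or routed architectures such as MoE.
    loss = (
        nn.functional.cross_entropy(outputs["classify"], targets["classify"])
        + nn.functional.mse_loss(outputs["regress"], targets["regress"])
    )
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```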
Papers
Many-Objective Multi-Solution Transport
Ziyue Li, Tian Li, Virginia Smith, Jeff Bilmes, Tianyi Zhou
Automated Multi-Task Learning for Joint Disease Prediction on Electronic Health Records
Suhan Cui, Prasenjit Mitra
3D Object Visibility Prediction in Autonomous Driving
Chuanyu Luo, Nuo Cheng, Ren Zhong, Haipeng Jiang, Wenyu Chen, Aoli Wang, Pu Li
Multi-task Learning for Real-time Autonomous Driving Leveraging Task-adaptive Attention Generator
Wonhyeok Choi, Mingyu Shin, Hyukzae Lee, Jaehoon Cho, Jaehyeon Park, Sunghoon Im
Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language Models
Wenfeng Feng, Chuzhan Hao, Yuewei Zhang, Yu Han, Hao Wang
Speaker-Independent Dysarthria Severity Classification using Self-Supervised Transformers and Multi-Task Learning
Lauren Stumpf, Balasundaram Kadirvelu, Sigourney Waibel, A. Aldo Faisal
VEnvision3D: A Synthetic Perception Dataset for 3D Multi-Task Model Research
Jiahao Zhou, Chen Long, Yue Xie, Jialiang Wang, Boheng Li, Haiping Wang, Zhe Chen, Zhen Dong