Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on challenges such as task interference and optimization difficulties, exploring architectures like Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance across diverse tasks. MTL's significance lies in its potential to improve resource utilization and create more robust, adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
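In its most common form, MTL uses hard parameter sharing: a shared backbone feeds several task-specific heads, and the per-task losses are summed into a single training objective. The sketch below illustrates this pattern in PyTorch for a hypothetical two-task setup (classification plus regression) with equal loss weights; the architecture, dimensions, and data are illustrative assumptions, not drawn from any of the papers listed here.

```python
# Minimal sketch of hard parameter sharing for multi-task learning.
# Hypothetical two-task model: the encoder is shared, each task gets its own head.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, in_dim=32, hidden_dim=64, num_classes=5):
        super().__init__()
        # Shared trunk: its parameters are updated by gradients from every task.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific heads.
        self.cls_head = nn.Linear(hidden_dim, num_classes)  # task A: classification
        self.reg_head = nn.Linear(hidden_dim, 1)             # task B: regression

    def forward(self, x):
        h = self.encoder(x)
        return self.cls_head(h), self.reg_head(h)

model = MultiTaskModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
cls_loss_fn, reg_loss_fn = nn.CrossEntropyLoss(), nn.MSELoss()

# One training step on placeholder random data.
x = torch.randn(16, 32)
y_cls = torch.randint(0, 5, (16,))
y_reg = torch.randn(16, 1)

cls_logits, reg_pred = model(x)
# Equal task weights here; unbalanced or conflicting gradients from the two
# losses are one source of the task interference discussed above.
loss = cls_loss_fn(cls_logits, y_cls) + reg_loss_fn(reg_pred, y_reg)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Much of the work listed below can be read as refinements of this basic recipe, for example replacing the naive loss sum with learned or optimized task weightings, or replacing the monolithic shared trunk with conditional-computation architectures such as Mixture-of-Experts.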
Papers
Multi-task Bias-Variance Trade-off Through Functional Constraints
Juan Cervino, Juan Andres Bazerque, Miguel Calvo-Fullana, Alejandro Ribeiro
Automatic Severity Classification of Dysarthric speech by using Self-supervised Model with Multi-task Learning
Eun Jung Yeo, Kwanghee Choi, Sunhee Kim, Minhwa Chung
Is Multi-Task Learning an Upper Bound for Continual Learning?
Zihao Wu, Huy Tran, Hamed Pirsiavash, Soheil Kolouri
M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design
Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang
Analyzing Multi-Task Learning for Abstractive Text Summarization
Frederic Kirstein, Jan Philip Wahle, Terry Ruas, Bela Gipp
Entity Tracking via Effective Use of Multi-Task Learning Model and Mention-guided Decoding
Janvijay Singh, Fan Bai, Zhen Wang
Semantic Cross Attention for Few-shot Learning
Bin Xiao, Chien-Liang Liu, Wen-Hoar Hsaio
Task Compass: Scaling Multi-task Pre-training with Task Prefix
Zhuosheng Zhang, Shuohang Wang, Yichong Xu, Yuwei Fang, Wenhao Yu, Yang Liu, Hai Zhao, Chenguang Zhu, Michael Zeng
Optimizing Evaluation Metrics for Multi-Task Learning via the Alternating Direction Method of Multipliers
Ge-Yang Ke, Yan Pan, Jian Yin, Chang-Qin Huang