Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on addressing challenges such as task interference and optimization difficulties, and explores architectures including Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to enhance performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and create more robust and adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
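To make the shared-model idea concrete, below is a minimal sketch of hard parameter sharing in PyTorch: one shared encoder, one lightweight head per task, and a jointly summed training loss. The module names, dimensions, and task weights here are illustrative assumptions, not taken from any of the listed papers.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Hard parameter sharing: a shared encoder plus one output head per task."""

    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        # Shared representation reused by every task.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific heads, keyed by task name.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden_dim, out_dim) for name, out_dim in task_out_dims.items()}
        )

    def forward(self, x, task):
        return self.heads[task](self.encoder(x))


# Joint training step: sum (statically weighted) per-task losses and update once.
model = MultiTaskModel(in_dim=32, hidden_dim=64, task_out_dims={"classify": 10, "regress": 1})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fns = {"classify": nn.CrossEntropyLoss(), "regress": nn.MSELoss()}

x = torch.randn(8, 32)  # toy batch shared by both tasks
targets = {"classify": torch.randint(0, 10, (8,)), "regress": torch.randn(8, 1)}
task_weights = {"classify": 1.0, "regress": 0.5}  # illustrative fixed weights

total_loss = sum(
    task_weights[t] * loss_fns[t](model(x, t), targets[t]) for t in model.heads
)
optimizer.zero_grad()
total_loss.backward()
optimizer.step()
```

In this simple setup the shared encoder receives gradients from every task, which is where task interference can arise; much of the work below (e.g. MoE routing or auxiliary-task designs) can be read as ways to control how tasks share or separate parameters.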
Papers
Enhancing and Adversarial: Improve ASR with Speaker Labels
Wei Zhou, Haotian Wu, Jingjing Xu, Mohammad Zeineldeen, Christoph Lüscher, Ralf Schlüter, Hermann Ney
Helping the Weak Makes You Strong: Simple Multi-Task Learning Improves Non-Autoregressive Translators
Xinyou Wang, Zaixiang Zheng, Shujian Huang
Multi-task Bias-Variance Trade-off Through Functional Constraints
Juan Cervino, Juan Andres Bazerque, Miguel Calvo-Fullana, Alejandro Ribeiro
Automatic Severity Classification of Dysarthric speech by using Self-supervised Model with Multi-task Learning
Eun Jung Yeo, Kwanghee Choi, Sunhee Kim, Minhwa Chung
Is Multi-Task Learning an Upper Bound for Continual Learning?
Zihao Wu, Huy Tran, Hamed Pirsiavash, Soheil Kolouri
M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design
Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang
Analyzing Multi-Task Learning for Abstractive Text Summarization
Frederic Kirstein, Jan Philip Wahle, Terry Ruas, Bela Gipp