Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on challenges such as task interference and optimization difficulty, exploring architectures such as Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and to produce more robust, adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
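The most common MTL setup is hard parameter sharing: a shared encoder feeds several task-specific heads, and the per-task losses are combined into one training objective. The sketch below is a minimal illustration of that idea; the task names, dimensions, and loss weights are illustrative assumptions, not taken from any of the listed papers.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Toy hard-parameter-sharing model: one shared encoder, one head per task."""
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        # Parameters shared across all tasks.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One lightweight task-specific output head per task.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, out_dim)
            for name, out_dim in task_out_dims.items()
        })

    def forward(self, x):
        z = self.encoder(x)
        return {name: head(z) for name, head in self.heads.items()}

# Hypothetical usage: two classification tasks trained jointly with a weighted sum of losses.
model = HardSharingMTL(in_dim=32, hidden_dim=64, task_out_dims={"task_a": 5, "task_b": 3})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
task_weights = {"task_a": 1.0, "task_b": 0.5}  # hand-chosen weights; many methods learn these instead

x = torch.randn(16, 32)
labels = {"task_a": torch.randint(0, 5, (16,)), "task_b": torch.randint(0, 3, (16,))}

outputs = model(x)
loss = sum(task_weights[t] * loss_fn(outputs[t], labels[t]) for t in outputs)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because all tasks update the shared encoder, conflicting gradients can cause the task interference mentioned above; MoE layers, low-rank adapters, and learned loss weighting are common ways to mitigate it.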
Papers
Data exploitation: multi-task learning of object detection and semantic segmentation on partially annotated data
Hoàng-Ân Lê, Minh-Tan Pham
Rethinking and Improving Multi-task Learning for End-to-end Speech Translation
Yuhao Zhang, Chen Xu, Bei Li, Hao Chen, Tong Xiao, Chunliang Zhang, Jingbo Zhu
deep-REMAP: Parameterization of Stellar Spectra Using Regularized Multi-Task Learning
Sankalp Gilda
Spatio-Temporal Similarity Measure based Multi-Task Learning for Predicting Alzheimer's Disease Progression using MRI Data
Xulong Wang, Yu Zhang, Menghui Zhou, Tong Liu, Jun Qi, Po Yang
Multitask Kernel-based Learning with First-Order Logic Constraints
Michelangelo Diligenti, Marco Gori, Marco Maggini, Leonardo Rigutini