Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on challenges such as task interference and optimization difficulty, and explores architectures such as Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and to yield more robust, adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
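To make the summary above concrete, here is a minimal sketch of the most common MTL setup, hard parameter sharing: a single backbone shared by all tasks, a lightweight head per task, and a weighted sum of per-task losses. The module, dimensions, and loss weights are illustrative assumptions and are not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared backbone with one lightweight head per task (hard parameter sharing)."""
    def __init__(self, in_dim=32, hidden_dim=64, n_classes=10):
        super().__init__()
        # Backbone parameters are shared by every task.
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific heads: here, one classification task and one regression task.
        self.cls_head = nn.Linear(hidden_dim, n_classes)
        self.reg_head = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        z = self.backbone(x)
        return self.cls_head(z), self.reg_head(z)

model = HardSharingMTL()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: one shared input, one label per task.
x = torch.randn(16, 32)
y_cls = torch.randint(0, 10, (16,))
y_reg = torch.randn(16, 1)

cls_logits, reg_pred = model(x)
# Per-task losses combined with fixed weights; tuning (or learning) these weights
# is one simple way the task-interference problem mentioned above shows up in practice.
loss = nn.functional.cross_entropy(cls_logits, y_cls) + 0.5 * nn.functional.mse_loss(reg_pred, y_reg)

opt.zero_grad()
loss.backward()
opt.step()
```

Architectures such as MoE layers or low-rank adapters replace or augment the shared backbone so that tasks can route around one another, which is one way current work tries to reduce the interference a plain shared trunk can cause.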
Papers
Robust Analysis of Multi-Task Learning Efficiency: New Benchmarks on Light-Weighed Backbones and Effective Measurement of Multi-Task Learning Challenges by Feature Disentanglement
Dayou Mao, Yuhao Chen, Yifan Wu, Maximilian Gilles, Alexander Wong
Representation Surgery for Multi-Task Model Merging
Enneng Yang, Li Shen, Zhenyi Wang, Guibing Guo, Xiaojun Chen, Xingwei Wang, Dacheng Tao
Deep Nonlinear Hyperspectral Unmixing Using Multi-task Learning
Saeid Mehrdad, Seyed AmirHossein Janani