Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on challenges such as task interference and optimization difficulties, and explores architectures such as Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and to produce more robust and adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
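The common starting point behind the approaches above is hard parameter sharing: a shared trunk learns a joint representation while lightweight task-specific heads produce each task's output, and the per-task losses are combined into a single objective. The PyTorch sketch below illustrates this basic setup only; the layer sizes, task pair (classification plus regression), and fixed loss weights are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal sketch of hard parameter sharing for multi-task learning.
# A shared trunk feeds two hypothetical task heads; losses are combined
# with fixed weights. All dimensions and weights are illustrative.
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    def __init__(self, in_dim=128, hidden_dim=256, num_classes=10):
        super().__init__()
        # Shared representation used by every task
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific heads
        self.cls_head = nn.Linear(hidden_dim, num_classes)  # classification task
        self.reg_head = nn.Linear(hidden_dim, 1)            # regression task

    def forward(self, x):
        h = self.trunk(x)
        return self.cls_head(h), self.reg_head(h)

model = SharedTrunkMTL()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
cls_loss_fn, reg_loss_fn = nn.CrossEntropyLoss(), nn.MSELoss()

# Dummy batch: shared inputs with labels for both tasks
x = torch.randn(32, 128)
y_cls = torch.randint(0, 10, (32,))
y_reg = torch.randn(32, 1)

cls_logits, reg_pred = model(x)
# Fixed loss weights; in practice these are tuned or learned,
# which is one way research tries to reduce task interference.
loss = 1.0 * cls_loss_fn(cls_logits, y_cls) + 0.5 * reg_loss_fn(reg_pred, y_reg)
opt.zero_grad()
loss.backward()
opt.step()
```

MoE layers, low-rank adapters, and hierarchical designs can be read as refinements of this template that decide how much of the trunk each task actually shares.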
Papers
Regularization Through Simultaneous Learning: A Case Study on Plant Classification
Pedro Henrique Nascimento Castro, Gabriel Cássia Fortuna, Rafael Alves Bonfim de Queiroz, Gladston Juliano Prates Moreira, Eduardo José da Silva Luz
Transferring Fairness using Multi-Task Learning with Limited Demographic Information
Carlos Aguirre, Mark Dredze
Rubik's Optical Neural Networks: Multi-task Learning with Physics-aware Rotation Architecture
Yingjie Li, Weilu Gao, Cunxi Yu
A Multi-Task Approach to Robust Deep Reinforcement Learning for Resource Allocation
Steffen Gracla, Carsten Bockelmann, Armin Dekorsy
Curriculum Modeling the Dependence among Targets with Multi-task Learning for Financial Marketing
Yunpeng Weng, Xing Tang, Liang Chen, Xiuqiang He