Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on mitigating task interference and optimization difficulties, exploring architectures such as Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and to yield more robust, adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
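To make the core idea concrete, below is a minimal sketch of the most common MTL setup: a shared encoder feeding task-specific heads, trained on a weighted sum of per-task losses. It is illustrative only and not drawn from any of the papers listed below; the dimensions, tasks, and loss weight are hypothetical placeholders.

    # Minimal multi-task learning sketch (assumed setup: shared encoder,
    # task-specific heads, joint weighted loss). All sizes are illustrative.
    import torch
    import torch.nn as nn

    class MultiTaskModel(nn.Module):
        def __init__(self, in_dim=32, hidden_dim=64, num_classes=5):
            super().__init__()
            # Shared representation used by both tasks.
            self.encoder = nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(),
            )
            # Task-specific heads: one classification task, one regression task.
            self.cls_head = nn.Linear(hidden_dim, num_classes)
            self.reg_head = nn.Linear(hidden_dim, 1)

        def forward(self, x):
            h = self.encoder(x)
            return self.cls_head(h), self.reg_head(h)

    model = MultiTaskModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    cls_loss_fn = nn.CrossEntropyLoss()
    reg_loss_fn = nn.MSELoss()

    # Toy batch: the same inputs carry labels for both tasks.
    x = torch.randn(16, 32)
    y_cls = torch.randint(0, 5, (16,))
    y_reg = torch.randn(16, 1)

    for step in range(100):
        optimizer.zero_grad()
        logits, preds = model(x)
        # Joint objective: weighted sum of per-task losses; the 0.5 weight is arbitrary.
        loss = cls_loss_fn(logits, y_cls) + 0.5 * reg_loss_fn(preds, y_reg)
        loss.backward()
        optimizer.step()

Because both heads backpropagate into the same encoder, gradients from one task can help or hurt the other; the task-interference and optimization challenges mentioned above arise precisely from this shared-parameter coupling.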
Papers
HyperPELT: Unified Parameter-Efficient Language Model Tuning for Both Language and Vision-and-Language Tasks
Zhengkun Zhang, Wenya Guo, Xiaojun Meng, Yasheng Wang, Yadao Wang, Xin Jiang, Qun Liu, Zhenglu Yang
YONO: Modeling Multiple Heterogeneous Neural Networks on Microcontrollers
Young D. Kwon, Jagmohan Chauhan, Cecilia Mascolo
Multi-Task Multi-Scale Learning For Outcome Prediction in 3D PET Images
Amine Amyar, Romain Modzelewski, Pierre Vera, Vincent Morard, Su Ruan
A multi-task learning for cavitation detection and cavitation intensity recognition of valve acoustic signals
Yu Sha, Johannes Faber, Shuiping Gou, Bo Liu, Wei Li, Stefan Schramm, Horst Stoecker, Thomas Steckenreiter, Domagoj Vnucec, Nadine Wetzstein, Andreas Widl, Kai Zhou
JOINED: Prior Guided Multi-task Learning for Joint Optic Disc/Cup Segmentation and Fovea Detection
Huaqing He, Li Lin, Zhiyuan Cai, Xiaoying Tang