Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on challenges such as task interference and optimization difficulties, exploring architectures like Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and to produce more robust, adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
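To make the core idea concrete, here is a minimal sketch (not from any of the listed papers) of hard parameter sharing, the most common MTL architecture: a shared backbone produces one representation that feeds several task-specific heads, so the tasks regularize each other through the shared weights. All layer sizes and the two example tasks (regression and 3-way classification) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared backbone: one hidden layer whose weights are reused by every task.
W_shared = rng.normal(size=(16, 8))

# Task-specific heads: a regression head (1 output) and a 3-way classifier.
W_reg = rng.normal(size=(8, 1))
W_cls = rng.normal(size=(8, 3))

def forward(x):
    """Compute both task outputs from a single shared representation."""
    h = np.maximum(x @ W_shared, 0.0)   # shared ReLU features
    y_reg = h @ W_reg                   # regression head output
    logits = h @ W_cls                  # classification head logits
    # Numerically stable softmax over the class dimension.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return y_reg, probs

x = rng.normal(size=(4, 16))            # a batch of 4 examples
y_reg, probs = forward(x)
print(y_reg.shape, probs.shape)         # (4, 1) (4, 3)
```

In training, each head would get its own loss and the losses would be combined (e.g. a weighted sum) before backpropagating through the shared backbone; much current MTL research, including several papers below, concerns how to weight or route these tasks so they do not interfere.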
Papers
Loop Improvement: An Efficient Approach for Extracting Shared Features from Heterogeneous Data without Central Server
Fei Li, Chu Kiong Loo, Wei Shiung Liew, Xiaofeng Liu
Open Knowledge Base Canonicalization with Multi-task Learning
Bingchen Liu, Huang Peng, Weixin Zeng, Xiang Zhao, Shijun Liu, Li Pan
Leveraging Large Language Model-based Room-Object Relationships Knowledge for Enhancing Multimodal-Input Object Goal Navigation
Leyuan Sun, Asako Kanezaki, Guillaume Caron, Yusuke Yoshiyasu
Many-Objective Multi-Solution Transport
Ziyue Li, Tian Li, Virginia Smith, Jeff Bilmes, Tianyi Zhou
Automated Multi-Task Learning for Joint Disease Prediction on Electronic Health Records
Suhan Cui, Prasenjit Mitra
3D Object Visibility Prediction in Autonomous Driving
Chuanyu Luo, Nuo Cheng, Ren Zhong, Haipeng Jiang, Wenyu Chen, Aoli Wang, Pu Li
Multi-task Learning for Real-time Autonomous Driving Leveraging Task-adaptive Attention Generator
Wonhyeok Choi, Mingyu Shin, Hyukzae Lee, Jaehoon Cho, Jaehyeon Park, Sunghoon Im