Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on challenges such as task interference and conflicting gradients during optimization, and explores architectures such as Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance across diverse tasks. MTL's significance lies in its potential to improve resource utilization and create more robust and adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
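To make the shared-parameter setup and the interference problem concrete, here is a minimal sketch (PyTorch; the module, task names, and dimensions are illustrative assumptions, not drawn from any paper below). It pairs a shared encoder with per-task heads and measures interference as the cosine similarity between the tasks' gradients on the shared parameters; a negative value signals the conflicting gradients that methods like gradient coordination and Recon aim to reduce.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hard parameter sharing: one shared encoder, one lightweight head per task.
# Task names and dimensions are hypothetical, chosen only for illustration.
class SharedEncoderMTL(nn.Module):
    def __init__(self, in_dim=32, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({
            "task_a": nn.Linear(hidden, 1),    # e.g. a regression task
            "task_b": nn.Linear(hidden, 10),   # e.g. a 10-class task
        })

    def forward(self, x):
        z = self.encoder(x)  # shared representation used by every task
        return {name: head(z) for name, head in self.heads.items()}

def shared_grad(model, loss):
    """Flatten the gradient of `loss` w.r.t. the shared encoder only."""
    grads = torch.autograd.grad(loss, model.encoder.parameters(),
                                retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

model = SharedEncoderMTL()
x = torch.randn(16, 32)
out = model(x)
loss_a = F.mse_loss(out["task_a"], torch.randn(16, 1))
loss_b = F.cross_entropy(out["task_b"], torch.randint(0, 10, (16,)))

# Task interference shows up as conflicting gradients on the shared encoder:
# a negative cosine similarity means one task's update pushes the shared
# parameters in a direction that hurts the other task.
g_a, g_b = shared_grad(model, loss_a), shared_grad(model, loss_b)
conflict = F.cosine_similarity(g_a, g_b, dim=0)
print(f"gradient cosine similarity: {conflict.item():.3f}")
```

Gradient-surgery methods build on exactly this quantity, e.g. by projecting one task's gradient onto the normal plane of the other's whenever the similarity is negative.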
Papers
Mirror U-Net: Marrying Multimodal Fission with Multi-task Learning for Semantic Segmentation in Medical Imaging
Zdravko Marinov, Simon Reiß, David Kersting, Jens Kleesiek, Rainer Stiefelhagen
Object-Centric Multi-Task Learning for Human Instances
Hyeongseok Son, Sangil Jung, Solae Lee, Seongeun Kim, Seung-In Park, ByungIn Yoo
HiNet: Novel Multi-Scenario & Multi-Task Learning with Hierarchical Information Extraction
Jie Zhou, Xianshuai Cao, Wenhao Li, Lin Bo, Kun Zhang, Chuan Luo, Qian Yu
Gradient Coordination for Quantifying and Maximizing Knowledge Transference in Multi-Task Learning
Xuanhua Yang, Jianxin Zhao, Shaoguo Liu, Liang Wang, Bo Zheng
Adaptive Weight Assignment Scheme For Multi-task Learning
Aminul Huq, Mst Tasnim Pervin
Learning Language-Conditioned Deformable Object Manipulation with Graph Dynamics
Yuhong Deng, Kai Mo, Chongkun Xia, Xueqian Wang
Artificial Intelligence for Dementia Research Methods Optimization
Magda Bucholc, Charlotte James, Ahmad Al Khleifat, AmanPreet Badhwar, Natasha Clarke, Amir Dehsarvi, Christopher R. Madan, Sarah J. Marzi, Cameron Shand, Brian M. Schilder, Stefano Tamburin, Hanz M. Tantiangco, Ilianna Lourida, David J. Llewellyn, Janice M. Ranson
Gradient Remedy for Multi-Task Learning in End-to-End Noise-Robust Speech Recognition
Yuchen Hu, Chen Chen, Ruizhe Li, Qiushi Zhu, Eng Siong Chng
Recon: Reducing Conflicting Gradients from the Root for Multi-Task Learning
Guangyuan Shi, Qimai Li, Wenlong Zhang, Jiaxin Chen, Xiao-Ming Wu
Preventing Catastrophic Forgetting in Continual Learning of New Natural Language Tasks
Sudipta Kar, Giuseppe Castellucci, Simone Filice, Shervin Malmasi, Oleg Rokhlenko