Multitask Learning
Multitask learning (MTL) aims to improve the efficiency and generalization of machine learning models by training them on multiple related tasks simultaneously. Current research focuses on addressing task interference through techniques such as gradient projection and adaptive task weighting; efficiently estimating task relationships to inform model architecture design (e.g., with transformer-based architectures and graph neural networks); and mitigating biases in model outputs across different subgroups. By leveraging shared representations and improving data efficiency, MTL enhances performance across diverse fields, including natural language processing, computer vision, and healthcare.
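To make the gradient-projection idea concrete, below is a minimal sketch in the style of PCGrad (Yu et al., 2020): when two tasks' gradients on the shared parameters conflict (negative inner product), each gradient is projected off the other's direction before the update. The toy model, dimensions, and losses are illustrative assumptions, not taken from any paper listed here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hard parameter sharing: one shared trunk, two task-specific heads.
shared = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(2)])
params = list(shared.parameters())

x = torch.randn(8, 16)                      # illustrative batch
targets = [torch.randn(8, 1) for _ in range(2)]

def task_grad(i):
    """Flattened gradient of task i's loss w.r.t. the shared parameters."""
    loss = nn.functional.mse_loss(heads[i](shared(x)), targets[i])
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

g = [task_grad(0), task_grad(1)]

# PCGrad-style surgery: if g_i . g_j < 0, remove the component of g_i
# along g_j so the tasks no longer pull the shared trunk in opposing
# directions.
projected = []
for i, gi in enumerate(g):
    gi = gi.clone()
    for j, gj in enumerate(g):
        if i != j:
            dot = torch.dot(gi, gj)
            if dot < 0:
                gi = gi - (dot / gj.norm() ** 2) * gj
    projected.append(gi)

# Combined, de-conflicted update direction for the shared parameters.
update = sum(projected)
```

The same projection loop extends to more than two tasks by iterating over all pairwise conflicts; adaptive task weighting would instead rescale each task's loss (and hence its gradient) rather than altering gradient directions.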
Papers
Hierarchical Bayesian Modelling for Knowledge Transfer Across Engineering Fleets via Multitask Learning
L. A. Bull, D. Di Francesco, M. Dhada, O. Steinert, T. Lindgren, A. K. Parlikad, A. B. Duncan, M. Girolami
Improving Feature Generalizability with Multitask Learning in Class Incremental Learning
Dong Ma, Chi Ian Tang, Cecilia Mascolo