Multitask Learning
Multitask learning (MTL) aims to improve the efficiency and generalization of machine learning models by training them on multiple related tasks simultaneously. Current research focuses on mitigating task interference through techniques such as gradient projection and adaptive task weighting, on efficiently estimating task relationships to guide architecture design (e.g., with transformer-based architectures and graph neural networks), and on reducing bias in model outputs across subgroups. By leveraging shared representations and improving data efficiency, MTL enhances performance across diverse fields, including natural language processing, computer vision, and healthcare.
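To make the gradient-projection idea concrete, here is a minimal sketch in the spirit of PCGrad-style methods: when two task gradients conflict (negative dot product), one is projected onto the normal plane of the other so the conflicting component is removed. The function names and the plain-list gradient representation are illustrative assumptions, not any specific paper's implementation.

```python
def dot(a, b):
    """Dot product of two gradient vectors represented as plain lists."""
    return sum(x * y for x, y in zip(a, b))

def project_conflict(g_i, g_j):
    """PCGrad-style projection sketch (illustrative, not a library API).

    If g_i conflicts with g_j (their dot product is negative), subtract
    g_i's component along g_j, leaving a gradient orthogonal to g_j.
    Otherwise g_i is returned unchanged.
    """
    d = dot(g_i, g_j)
    if d >= 0:
        return list(g_i)  # no conflict: keep the gradient as-is
    scale = d / dot(g_j, g_j)
    return [gi - scale * gj for gi, gj in zip(g_i, g_j)]

# Conflicting example: dot([1, 0], [-1, 1]) = -1 < 0, so g_i is projected.
g = project_conflict([1.0, 0.0], [-1.0, 1.0])
# The projected gradient is orthogonal to the other task's gradient:
# dot(g, [-1.0, 1.0]) == 0.0
```

In a full training loop this per-pair projection would be applied to each task's gradient against the others before summing them into the shared-parameter update.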