Multitask Learning
Multitask learning (MTL) aims to improve the efficiency and generalization of machine learning models by training them on multiple related tasks simultaneously. Current research focuses on mitigating task interference through techniques such as gradient projection and adaptive task weighting, on efficiently estimating task relationships to guide architecture design (e.g., with transformer-based architectures and graph neural networks), and on reducing bias in model outputs across subgroups. By leveraging shared representations and improving data efficiency, MTL enhances performance across diverse fields, including natural language processing, computer vision, and healthcare.
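To make the gradient-projection idea concrete, below is a minimal Python sketch in the spirit of PCGrad (Yu et al., 2020): when two task gradients conflict (negative inner product), one is projected onto the normal plane of the other before the updates are combined. The function name `pcgrad_combine`, the use of PyTorch, and the flattened per-task gradient representation are assumptions for illustration, not a reference implementation.

```python
import torch

def pcgrad_combine(grads):
    """Combine per-task gradients with PCGrad-style projection.

    grads: list of 1-D tensors, one flattened gradient per task.
    For each pair of conflicting gradients (negative dot product),
    the current task's gradient is projected onto the normal plane
    of the other task's gradient, reducing destructive interference.
    (The random task ordering from the original paper is omitted
    here for brevity.)
    """
    projected = []
    for i, g_i in enumerate(grads):
        g = g_i.clone()
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(g, g_j)
            if dot < 0:  # gradients conflict: remove the opposing component
                g = g - (dot / g_j.norm() ** 2) * g_j
        projected.append(g)
    # sum the de-conflicted task gradients into a single update direction
    return torch.stack(projected).sum(dim=0)

# toy usage: two conflicting task gradients
g1 = torch.tensor([1.0, 0.0])
g2 = torch.tensor([-0.5, 1.0])
update = pcgrad_combine([g1, g2])  # tensor([0.8, 1.4])
```

In this toy example the raw sum of the two gradients would partially cancel along the first coordinate; after projection, each task's update no longer opposes the other's, which is the interference reduction the technique targets.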