Multitask Learning
Multitask learning (MTL) aims to improve the efficiency and generalization of machine learning models by training them on multiple related tasks simultaneously. Current research focuses on mitigating task interference through techniques such as gradient projection and adaptive task weighting, on efficiently estimating task relationships to guide architecture design (e.g., with transformer-based architectures and graph neural networks), and on reducing biases in model outputs across subgroups. By leveraging shared representations and improving data efficiency, MTL enhances performance across diverse fields, including natural language processing, computer vision, and healthcare.
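The gradient-projection idea mentioned above can be illustrated with a minimal sketch in the style of PCGrad: when two tasks' gradients conflict (negative dot product), the component of one gradient along the other is removed so the shared update no longer opposes either task. The function and variable names below are illustrative, not from any particular library.

```python
import numpy as np

def project_conflicting(g_i, g_j):
    """PCGrad-style projection: if g_i conflicts with g_j
    (negative dot product), subtract from g_i its component
    along g_j so the update no longer opposes task j."""
    dot = np.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / np.dot(g_j, g_j)) * g_j
    return g_i

# Two conflicting task gradients (their dot product is -0.5).
g1 = np.array([1.0, 1.0])
g2 = np.array([-1.0, 0.5])

g1_proj = project_conflicting(g1, g2)
# After projection, g1_proj is orthogonal to g2, so applying it
# no longer increases task 2's loss to first order.
```

In practice this projection is applied symmetrically across all task pairs each step, and the projected gradients are summed or averaged before the optimizer update.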