Multitask Learning
Multitask learning (MTL) aims to improve the efficiency and generalization of machine learning models by training them on several related tasks simultaneously. Current research focuses on mitigating task interference through techniques such as gradient projection and adaptive task weighting, on efficiently estimating task relationships to inform architecture design (e.g., with transformer-based architectures and graph neural networks), and on reducing biases in model outputs across subgroups. By leveraging shared representations and improving data efficiency, MTL enhances performance across diverse fields, including natural language processing, computer vision, and healthcare.
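The gradient-projection idea can be made concrete with a short sketch. Below is a minimal PyTorch implementation in the style of PCGrad (Yu et al., 2020): each task's gradient is projected off the conflicting component of every other task's gradient before the optimizer step. The `model`, `losses`, and `optimizer` arguments are assumed placeholders for illustration, not any specific paper's API.

```python
import torch

def pcgrad_step(model, losses, optimizer):
    """One optimization step with PCGrad-style gradient projection.

    A minimal sketch, assuming `losses` is a list of per-task scalar
    losses computed from a shared `model`. Hypothetical helper, not a
    library function.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Compute each task's gradient separately and flatten it into one vector.
    task_grads = []
    for loss in losses:
        grads = torch.autograd.grad(loss, params, retain_graph=True)
        task_grads.append(torch.cat([g.reshape(-1) for g in grads]))

    # Project each task gradient away from directions that conflict
    # (negative dot product) with the other tasks' gradients.
    projected = []
    for i, g_i in enumerate(task_grads):
        g = g_i.clone()
        for j, g_j in enumerate(task_grads):
            if i == j:
                continue
            dot = torch.dot(g, g_j)
            if dot < 0:  # conflict: remove the component along g_j
                g -= (dot / g_j.norm() ** 2) * g_j
        projected.append(g)

    # Merge the de-conflicted gradients and write them back to the parameters.
    merged = torch.stack(projected).sum(dim=0)
    offset = 0
    for p in params:
        n = p.numel()
        p.grad = merged[offset:offset + n].reshape(p.shape).clone()
        offset += n
    optimizer.step()
    optimizer.zero_grad()
```

Summing the projected gradients (rather than averaging) follows the original PCGrad formulation; adaptive task weighting, by contrast, would rescale each loss before any gradients are combined.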