Multi-Task Learning
Multi-task learning (MTL) aims to improve efficiency and generalization by training a single model to perform several related tasks simultaneously, rather than training a separate model for each. Current research focuses on efficient architectures and algorithms, such as those incorporating Mixture-of-Experts, Low-Rank Adaptation (LoRA), and hierarchical structures, to handle diverse tasks and data types, including text, images, and sensor data. These advances matter because they improve resource utilization, strengthen generalization, and enable more robust and adaptable AI systems across applications ranging from natural language processing to robotics.
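The core idea of training one model on several tasks can be sketched with the simplest MTL architecture: hard parameter sharing, where a shared trunk feeds task-specific heads and gradients from every task flow into the shared weights. The following is a minimal NumPy sketch, not any specific paper's method; all names (`W_shared`, `head_a`, `head_b`) and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared linear trunk + tanh, feeding two task-specific regression heads
# (hard parameter sharing). Names and data are illustrative only.
n_features, n_hidden = 4, 3
W_shared = rng.normal(size=(n_features, n_hidden))  # shared representation
head_a = rng.normal(size=(n_hidden, 1))             # head for task A
head_b = rng.normal(size=(n_hidden, 1))             # head for task B

X = rng.normal(size=(32, n_features))
y_a = X @ rng.normal(size=(n_features, 1))          # synthetic task-A targets
y_b = X @ rng.normal(size=(n_features, 1))          # synthetic task-B targets

def losses():
    h = np.tanh(X @ W_shared)
    return (float(np.mean((h @ head_a - y_a) ** 2)),
            float(np.mean((h @ head_b - y_b) ** 2)))

loss_a0, loss_b0 = losses()

lr = 0.01
for _ in range(200):
    h = np.tanh(X @ W_shared)                       # shared hidden features
    err_a = h @ head_a - y_a
    err_b = h @ head_b - y_b
    # Joint objective: sum of per-task mean-squared errors.
    # Gradients through the shared trunk accumulate from BOTH tasks.
    grad_h = (err_a @ head_a.T + err_b @ head_b.T) * (1 - h ** 2)
    W_shared -= lr * X.T @ grad_h / len(X)
    head_a -= lr * h.T @ err_a / len(X)
    head_b -= lr * h.T @ err_b / len(X)

loss_a, loss_b = losses()
```

After joint training, both per-task losses drop below their initial values, illustrating how a single set of shared parameters can serve multiple objectives at once. Architectures like Mixture-of-Experts or LoRA adapters refine this basic pattern by making the shared capacity conditional or cheaply task-adaptable.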