Single Task Learning
Single-task learning (STL) trains a separate model for each task, prioritizing individual task performance and avoiding the negative interference that can arise when related tasks are trained jointly. Current research explores ways to improve STL's efficiency and effectiveness, including parameter averaging across independently trained models (sketched below) and the use of pre-trained large language models as universal decoders for diverse tasks. The approach is particularly relevant in resource-constrained settings or when task relationships are complex or poorly understood, offering a robust alternative to multi-task learning in applications ranging from clinical diagnosis to robotic control.
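As a rough illustration of the parameter-averaging idea mentioned above, the sketch below averages the weights of several independently trained single-task models that share one architecture. This is a minimal, hypothetical example in PyTorch; the function name `average_parameters` and the toy `nn.Linear` models are assumptions for illustration, not code from the papers summarized here.

```python
import copy
import torch
import torch.nn as nn

def average_parameters(models):
    """Return a new model whose floating-point parameters are the element-wise
    mean of the corresponding parameters in `models` (same architecture assumed)."""
    averaged = copy.deepcopy(models[0])
    state_dicts = [m.state_dict() for m in models]
    avg_state = averaged.state_dict()
    with torch.no_grad():
        for key, value in avg_state.items():
            # Only average floating-point tensors; integer buffers are left as-is.
            if value.is_floating_point():
                avg_state[key] = torch.stack(
                    [sd[key] for sd in state_dicts]
                ).mean(dim=0)
    averaged.load_state_dict(avg_state)
    return averaged

# Toy usage: three independently trained single-task models of the same shape.
models = [nn.Linear(16, 4) for _ in range(3)]
merged = average_parameters(models)
print(merged.weight.shape)  # torch.Size([4, 16])
```

In practice, averaging is typically only beneficial when the models start from a shared initialization (e.g., the same pre-trained checkpoint), so their weights remain in a compatible region of parameter space.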