Single-Task Learning
Single-task learning (STL) focuses on training separate models for each specific task, prioritizing individual task performance and avoiding potential negative interference from related tasks. Current research explores strategies to improve STL's efficiency and effectiveness, including leveraging parameter averaging across independently trained models and investigating the use of pre-trained large language models as universal decoders for diverse tasks. This approach is particularly relevant in resource-constrained scenarios or when task relationships are complex or poorly understood, offering a robust alternative to multi-task learning in various applications, from clinical diagnosis to robotic control.
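The parameter-averaging strategy mentioned above can be illustrated with a minimal sketch: several models are trained independently on the same task, and their weights are averaged element-wise to produce a single merged model. The function and variable names below are illustrative, not from any specific paper.

```python
import numpy as np

def average_params(models):
    """Uniformly average the parameters of independently trained models.

    Each model is represented as a dict mapping parameter names to
    NumPy arrays of identical shapes across models.
    """
    return {name: np.mean([m[name] for m in models], axis=0)
            for name in models[0]}

# Toy example: three "models", each a dict of weight arrays.
m1 = {"w": np.array([1.0, 2.0]), "b": np.array([0.0])}
m2 = {"w": np.array([3.0, 4.0]), "b": np.array([3.0])}
m3 = {"w": np.array([5.0, 6.0]), "b": np.array([6.0])}

avg = average_params([m1, m2, m3])
print(avg["w"])  # [3. 4.]
print(avg["b"])  # [3.]
```

Uniform averaging assumes the models share an architecture and compatible parameterizations; in practice, averaging tends to help only when the models start from the same pre-trained initialization.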