Sequential Multi-Task Learning
Sequential multi-task learning (SMTL) trains a single model on multiple related tasks one after another, aiming to improve efficiency and performance by transferring knowledge between tasks. Current research addresses challenges such as task dependency and task interference, using techniques like attention mechanisms, memory-enhanced architectures, and continual prompt tuning to promote knowledge transfer while avoiding catastrophic forgetting. The approach promises more robust and adaptable models across applications including natural language processing, computer vision, and legal tech.
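As a rough illustration of the basic idea, the following minimal PyTorch sketch trains a shared encoder on two tasks in sequence, each with its own head. All names, dimensions, and data here are hypothetical placeholders, and freezing a finished head is only one simple way to limit interference; the techniques named above (prompt tuning, memory modules, attention-based routing) are more sophisticated substitutes for this step.

```python
import torch
import torch.nn as nn

# Hypothetical setup: a shared encoder plus one linear head per task.
# Training the tasks one after another lets later tasks reuse the
# encoder's representations (knowledge transfer); freezing finished
# heads is a crude guard against interference with earlier tasks.

class SharedEncoder(nn.Module):
    def __init__(self, in_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())

    def forward(self, x):
        return self.net(x)

encoder = SharedEncoder()
heads = {
    "task_a": nn.Linear(64, 3),  # e.g. a 3-class task
    "task_b": nn.Linear(64, 5),  # e.g. a 5-class task
}
loss_fn = nn.CrossEntropyLoss()

def make_batch(n_classes, batch=16, in_dim=32):
    # Synthetic stand-in for real per-task data.
    x = torch.randn(batch, in_dim)
    y = torch.randint(0, n_classes, (batch,))
    return x, y

for task, head in heads.items():
    params = list(encoder.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    for step in range(100):  # train this task to (toy) convergence
        x, y = make_batch(head.out_features)
        loss = loss_fn(head(encoder(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Freeze the finished head so later tasks cannot disturb it.
    for p in head.parameters():
        p.requires_grad_(False)
    print(f"{task}: final loss {loss.item():.3f}")
```

Note that in this naive form the shared encoder keeps changing after task_a finishes, which is exactly the catastrophic-forgetting problem the literature above targets.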