Multitask Finetuning
Multitask finetuning adapts pre-trained large language models (LLMs) or foundation models to multiple downstream tasks simultaneously, aiming to improve efficiency and performance over training a separate model per task. Current research explores architectures and parameter-efficient methods such as Mixture-of-Experts (MoE) routing and Low-Rank Adaptation (LoRA) to reduce the number of trainable parameters and to mitigate catastrophic forgetting and negative transfer between tasks. The approach matters because it can yield better generalization to unseen tasks, faster training, and more efficient deployment of a single powerful model across diverse applications, including natural language processing, computer vision, and code generation.
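The sketch below illustrates the two ideas mentioned above in their simplest form: a frozen shared backbone adapted with LoRA-style low-rank updates, and a single training loop that mixes losses from several tasks in each optimization step. It is a minimal, self-contained example, not an implementation from any of the surveyed papers; the PyTorch setup, toy layer sizes, two-task heads, and synthetic data are all illustrative assumptions.

```python
# Minimal sketch: multitask finetuning with LoRA-style adapters on a frozen backbone.
# All shapes, task counts, and data are hypothetical placeholders.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (B A x) * scale."""
    def __init__(self, in_dim, out_dim, rank=4, alpha=8):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)  # pretrained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_dim, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale


class MultitaskModel(nn.Module):
    """Shared LoRA-adapted backbone with one lightweight classification head per task."""
    def __init__(self, in_dim=32, hidden=64, task_classes=(3, 5)):
        super().__init__()
        self.backbone = nn.Sequential(
            LoRALinear(in_dim, hidden), nn.ReLU(),
            LoRALinear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(nn.Linear(hidden, c) for c in task_classes)

    def forward(self, x, task_id):
        return self.heads[task_id](self.backbone(x))


def fake_batch(task_id, n=16, in_dim=32, n_classes=(3, 5)):
    # Stand-in for real per-task data loaders.
    return torch.randn(n, in_dim), torch.randint(0, n_classes[task_id], (n,))


model = MultitaskModel()
# Only LoRA matrices and task heads are trainable; the backbone's base weights are frozen.
trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    opt.zero_grad()
    loss = 0.0
    for task_id in range(2):  # mix tasks within each step so the shared adapters see all of them
        x, y = fake_batch(task_id)
        loss = loss + loss_fn(model(x, task_id), y)
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step}: mixed-task loss {loss.item():.3f}")
```

Freezing the base weights and training only the low-rank adapters and task heads is what gives the parameter efficiency the summary refers to; summing per-task losses in one step is the simplest mixing strategy, and real systems often add task sampling schedules or MoE routing to limit negative transfer.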