Frozen Pre-Trained Language Models
Frozen pre-trained language models (PLMs) are being extensively studied as a parameter-efficient alternative to full fine-tuning for a wide range of natural language processing tasks. Rather than updating the backbone weights, research focuses on adapting the frozen model with lightweight techniques such as prompt tuning (including variants like XPrompt and input-tuning), differentiable prompting, and adapter modules, which aim to improve downstream performance with minimal additional trainable parameters. This approach offers substantial savings in computational cost and memory, which is particularly valuable in low-resource settings and specialized domains such as clinical applications, while achieving performance comparable to, and sometimes exceeding, that of fully fine-tuned smaller models.
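As a concrete illustration, the sketch below shows the core mechanic these methods share: the PLM's weights are frozen, and only a small set of new parameters (here, soft prompt embeddings prepended to the input) is trained. It is a minimal sketch assuming the Hugging Face transformers library with GPT-2 as the backbone; the model choice, prompt length, and hyperparameters are illustrative and not taken from any specific paper.

```python
# Minimal sketch of prompt tuning against a frozen PLM.
# Assumptions (not from the papers above): Hugging Face `transformers`,
# GPT-2 as the backbone, 20 prompt tokens, AdamW with lr=1e-3.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze every backbone parameter; only the soft prompt below receives gradients.
for param in model.parameters():
    param.requires_grad = False

class SoftPrompt(nn.Module):
    """A small bank of trainable embeddings prepended to the input sequence."""
    def __init__(self, n_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = SoftPrompt(n_tokens=20, embed_dim=embed_dim)
optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-3)

# One illustrative training step on a toy example.
batch = tokenizer(["Translate to French: hello"], return_tensors="pt")
token_embeds = model.get_input_embeddings()(batch["input_ids"])
inputs_embeds = soft_prompt(token_embeds)

# Extend the attention mask to cover the prompt positions.
prompt_mask = torch.ones(
    inputs_embeds.size(0), soft_prompt.prompt.size(0),
    dtype=batch["attention_mask"].dtype,
)
attention_mask = torch.cat([prompt_mask, batch["attention_mask"]], dim=1)

# Ignore the prompt positions (-100) when computing the language-modeling loss.
prompt_labels = torch.full(
    (inputs_embeds.size(0), soft_prompt.prompt.size(0)), -100
)
labels = torch.cat([prompt_labels, batch["input_ids"]], dim=1)

loss = model(inputs_embeds=inputs_embeds, attention_mask=attention_mask,
             labels=labels).loss
loss.backward()   # gradients flow only into soft_prompt.prompt
optimizer.step()
```

Because only the prompt embeddings are updated, each downstream task stores just a few thousand parameters alongside the shared frozen backbone, which is what makes this family of methods attractive in low-resource and multi-task settings.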