Frozen Pre-Trained Language Model

Frozen pre-trained language models (PLMs) are being extensively studied as a parameter-efficient alternative to full model fine-tuning across natural language processing tasks. Research focuses on adapting these frozen models with techniques such as prompt tuning (including variants like XPrompt and input-tuning), differentiable prompting, and adapter modules, with the goal of improving downstream performance while training only a small number of additional parameters. Keeping the backbone frozen offers significant savings in computational cost and resources, which is particularly beneficial in low-resource settings and specialized domains such as clinical applications, while achieving performance comparable to, or even exceeding, that of fully fine-tuned smaller models.
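The sketch below illustrates the general idea of adapting a frozen backbone via soft prompt tuning: all backbone parameters are frozen, and only a small set of continuous prompt embeddings (plus a task head) is trained. It is a minimal, self-contained example, not any specific method from the papers listed here; the names `FrozenBackbone` and `PromptTunedClassifier` are hypothetical, and a small randomly initialized Transformer encoder stands in for an actual pre-trained model.

```python
import torch
import torch.nn as nn

class FrozenBackbone(nn.Module):
    """Stand-in for a frozen PLM; in practice this would be a pre-trained model."""
    def __init__(self, vocab_size=1000, d_model=128, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, token_embeddings):          # (B, T, D) -> (B, T, D)
        return self.encoder(token_embeddings)

class PromptTunedClassifier(nn.Module):
    """Soft prompt tuning: only the prompt embeddings and the head are trained."""
    def __init__(self, backbone, n_prompt_tokens=8, d_model=128, n_classes=2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():      # freeze the PLM
            p.requires_grad = False
        # Trainable continuous prompt prepended to every input sequence.
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)
        self.head = nn.Linear(d_model, n_classes)  # small task-specific head

    def forward(self, input_ids):
        tok = self.backbone.embed(input_ids)                        # (B, T, D)
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        hidden = self.backbone(torch.cat([prompt, tok], dim=1))     # (B, P+T, D)
        return self.head(hidden.mean(dim=1))                        # pooled logits

backbone = FrozenBackbone()
model = PromptTunedClassifier(backbone)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)   # updates prompt + head only

# Toy training step on random data.
input_ids = torch.randint(0, 1000, (4, 16))
labels = torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(input_ids), labels)
loss.backward()
optimizer.step()
```

Because the backbone's parameters have `requires_grad=False`, gradients still flow through the frozen layers to the prompt embeddings, but the optimizer only ever updates the prompt and the head, which is what makes the approach parameter-efficient.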

Papers