P-Tuning
P-tuning and its variants are parameter-efficient fine-tuning (PEFT) methods that adapt large language models (LLMs) and other pre-trained models to new tasks while updating only a small fraction of their parameters. Current research focuses on improving efficiency and effectiveness through techniques such as prompt engineering, hidden-state manipulation, and leveraging label representations to guide the tuning process. By substantially reducing the computational and memory cost of adapting LLMs to diverse downstream tasks, these methods benefit both research and practical applications, enabling more efficient and scalable deployment of powerful models. A minimal sketch follows.
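The sketch below shows the core idea using the Hugging Face `peft` library's `PromptEncoderConfig`, which implements P-tuning's prompt-encoder approach: a small set of trainable virtual tokens is prepended to the input while the base model stays frozen. The base model name and hyperparameter values here are illustrative assumptions, not prescriptions from any particular paper.

```python
# A minimal P-tuning sketch with Hugging Face transformers + peft.
# Assumptions: "gpt2" as a stand-in base model; num_virtual_tokens
# and encoder_hidden_size are arbitrary illustrative values.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptEncoderConfig, TaskType, get_peft_model

base = "gpt2"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# P-tuning: a small prompt-encoder MLP produces embeddings for
# trainable virtual tokens; the base model's weights stay frozen.
config = PromptEncoderConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,     # length of the learned soft prompt
    encoder_hidden_size=128,   # hidden size of the prompt encoder
)
model = get_peft_model(model, config)

# Only the prompt encoder's parameters are trainable, which is why
# the method needs far less memory than full fine-tuning.
model.print_trainable_parameters()
```

Training then proceeds as usual (e.g. with `transformers.Trainer`); only the prompt encoder's few hundred thousand parameters receive gradient updates.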