Prompt Transferability
Prompt transferability studies how effectively prompts (short instructions that guide large language models (LLMs) or other AI models) can be reused across different tasks or models. Current research focuses on improving transferability across domains (e.g., image classification, text generation, dialogue summarization) and model architectures (including vision-language models and LLMs), often using techniques such as prompt tuning, knowledge distillation, and contextual injection. This line of work matters because reusable prompts reduce the need for extensive retraining on new tasks, making AI systems more adaptable and resource-efficient.
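To make the idea concrete, here is a minimal sketch (not any paper's method) of two operations that appear throughout this literature: reusing a tuned soft prompt by prepending it to a new task's input embeddings, and estimating cross-task transferability via the cosine similarity of two tuned prompts. The prompt matrices, dimensions, and task names below are all hypothetical placeholders.

```python
import numpy as np

def prepend_prompt(prompt, token_embeddings):
    """Reuse a tuned soft prompt on a new task by prepending it
    to that task's input token embeddings."""
    return np.vstack([prompt, token_embeddings])

def transfer_score(prompt_a, prompt_b):
    """Cosine similarity between two tuned soft prompts, used here
    as a cheap proxy for how well a prompt trained on task A
    might transfer to task B."""
    a, b = prompt_a.ravel(), prompt_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
dim, prompt_len = 16, 4

# Hypothetical tuned prompts: one for sentiment, one for a closely
# related review task (a small perturbation), one for an unrelated task.
prompt_sentiment = rng.normal(size=(prompt_len, dim))
prompt_reviews = prompt_sentiment + 0.1 * rng.normal(size=(prompt_len, dim))
prompt_translation = rng.normal(size=(prompt_len, dim))

# Reuse the sentiment prompt on a new 6-token input.
tokens = rng.normal(size=(6, dim))
augmented = prepend_prompt(prompt_sentiment, tokens)  # shape (10, dim)

related = transfer_score(prompt_sentiment, prompt_reviews)
unrelated = transfer_score(prompt_sentiment, prompt_translation)
```

Under this proxy, prompts tuned on related tasks tend to score higher than prompts tuned on unrelated ones, which is the intuition behind prompt-similarity-based transfer heuristics.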