Prompt Space
Prompt space research focuses on optimizing the textual prompts that guide large language models (LLMs) and other deep learning models, particularly in low-data scenarios. Current work explores techniques such as soft prompt tuning, which learns continuous prompt embeddings while keeping the model weights frozen, and novel prompt architectures (e.g., global prompt cells) that improve performance and efficiency. This line of research aims to enhance the robustness, generalization, and parameter efficiency of these models across diverse tasks, benefiting fields like medical image analysis and natural language processing by enabling strong performance with less data and compute.
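The core idea of soft prompt tuning is easy to make concrete: a small set of trainable "virtual token" embeddings is prepended to the input embeddings, and only those prompt parameters receive gradients. Below is a minimal PyTorch sketch, assuming a base model that accepts precomputed embeddings (as Hugging Face transformers do via the inputs_embeds argument); the wrapper class name, num_virtual_tokens, and the initialization scale are illustrative choices, not a reference implementation from any of the papers above.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepends trainable prompt embeddings to the input; the base model stays frozen."""

    def __init__(self, base_model, embed_dim, num_virtual_tokens=20):
        super().__init__()
        self.base_model = base_model
        for p in self.base_model.parameters():
            p.requires_grad = False  # freeze the model: only the soft prompt is trained
        # Learnable virtual-token embeddings, randomly initialized with a small scale
        self.soft_prompt = nn.Parameter(
            torch.randn(num_virtual_tokens, embed_dim) * 0.02
        )

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim) token embeddings from the base model
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Concatenate the soft prompt in front of the real token embeddings.
        # (Attention-mask bookkeeping is omitted here for brevity.)
        combined = torch.cat([prompt, input_embeds], dim=1)
        # Assumes the base model exposes an inputs_embeds keyword, as in
        # Hugging Face transformers models.
        return self.base_model(inputs_embeds=combined)
```

Only the num_virtual_tokens x embed_dim prompt parameters are updated during training, which is why the approach suits low-data scenarios: the trainable parameter count is orders of magnitude smaller than full fine-tuning of the model weights.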