Prompt Embeddings
Prompt embeddings are learned representations used to steer large language models (LLMs) and other foundation models, such as the Segment Anything Model (SAM), toward desired outputs without extensive fine-tuning. Current research focuses on automatically generating these embeddings, often using techniques such as prompt tuning, meta-learning, and contrastive learning, to improve performance on tasks including image segmentation, text-to-image generation, and text classification. Because only the prompt embeddings are trained while the base model stays frozen, this approach is parameter-efficient and offers finer control over model behavior, enabling more adaptable and controllable systems in fields such as medical image analysis and natural language processing.
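The core mechanism of prompt tuning can be sketched as follows: a small matrix of learnable "soft prompt" vectors is prepended to the embedded input sequence, and only those vectors are updated during training while the base model's weights stay frozen. The dimensions and names below are illustrative, not from any specific model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
vocab_size, embed_dim = 100, 16
num_prompt_tokens = 4   # length of the learned soft prompt
seq_len = 10            # length of the actual input sequence

# Frozen token-embedding table of the base model (not updated).
token_embeddings = rng.normal(size=(vocab_size, embed_dim))

# Learnable prompt embeddings: the ONLY parameters updated in prompt tuning.
prompt_embeddings = rng.normal(size=(num_prompt_tokens, embed_dim))

def embed_with_prompt(token_ids: np.ndarray) -> np.ndarray:
    """Prepend the soft prompt to the embedded input sequence."""
    inputs = token_embeddings[token_ids]           # (seq_len, embed_dim)
    return np.concatenate([prompt_embeddings, inputs], axis=0)

tokens = rng.integers(0, vocab_size, size=seq_len)
combined = embed_with_prompt(tokens)
print(combined.shape)  # (14, 16): prompt tokens + input tokens
```

The combined sequence is then fed to the frozen model exactly as ordinary token embeddings would be; gradients flow back only into `prompt_embeddings`, which is what makes the method parameter-efficient.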