Input-Dependent Prompting
Input-dependent prompting focuses on dynamically tailoring prompts to each input for large vision-language models (VLMs), improving performance on diverse downstream tasks, particularly in open-set and few-shot learning scenarios. Current research emphasizes methods for generating these prompts, including test-time tuning, composable prompts, and dual-context learning, often drawing on large language models (LLMs) to improve prompt design. This line of work matters because it enables efficient adaptation of powerful pre-trained models to new tasks and domains without extensive retraining, yielding more flexible and robust AI systems across a range of applications.
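As a concrete illustration of how a prompt can be conditioned on the input, the sketch below follows the general recipe popularized by CoCoOp-style input-conditional prompt learning: a small meta-network maps each image's features to a shift that is added to shared learnable context tokens, so every image effectively gets its own prompt. This is a minimal PyTorch sketch rather than any specific paper's implementation; the stand-in encoders, dimensions, and names such as InputConditionalPrompter and meta_net are illustrative placeholders for the components of a frozen CLIP-like model.

```python
# Minimal sketch of input-conditional prompt learning for a CLIP-like VLM.
# The encoders here are frozen stand-ins; in practice they would be the
# image and text towers of a pretrained vision-language model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InputConditionalPrompter(nn.Module):
    def __init__(self, num_ctx=4, embed_dim=512, num_classes=10):
        super().__init__()
        # Shared learnable context tokens (the "prompt"), one set for all classes.
        self.ctx = nn.Parameter(torch.randn(num_ctx, embed_dim) * 0.02)
        # Meta-network: image feature -> per-input shift for the context tokens.
        self.meta_net = nn.Sequential(
            nn.Linear(embed_dim, embed_dim // 16),
            nn.ReLU(inplace=True),
            nn.Linear(embed_dim // 16, embed_dim),
        )
        # Frozen class-name token embeddings (stand-in for tokenized class names).
        self.register_buffer("class_tokens", torch.randn(num_classes, 1, embed_dim))

    def forward(self, image_feats):
        # image_feats: (B, D) from a frozen image encoder.
        shift = self.meta_net(image_feats)                  # (B, D)
        # Each image gets its own shifted copy of the shared context tokens.
        ctx = self.ctx.unsqueeze(0) + shift.unsqueeze(1)    # (B, num_ctx, D)
        B, num_classes = ctx.size(0), self.class_tokens.size(0)
        # Concatenate context with each class's name embedding: (B, C, num_ctx+1, D)
        ctx = ctx.unsqueeze(1).expand(-1, num_classes, -1, -1)
        cls = self.class_tokens.unsqueeze(0).expand(B, -1, -1, -1)
        return torch.cat([ctx, cls], dim=2)


def classify(prompt_tokens, image_feats, text_encoder, temperature=0.01):
    # prompt_tokens: (B, C, L, D); pool and encode each per-input class prompt.
    B, C, L, D = prompt_tokens.shape
    text_feats = text_encoder(prompt_tokens.reshape(B * C, L, D).mean(dim=1))
    text_feats = F.normalize(text_feats.reshape(B, C, -1), dim=-1)
    image_feats = F.normalize(image_feats, dim=-1)
    # Cosine-similarity logits between each image and its per-input class prompts.
    return torch.einsum("bd,bcd->bc", image_feats, text_feats) / temperature


if __name__ == "__main__":
    torch.manual_seed(0)
    D, B, C = 512, 2, 10
    # Frozen stand-in text encoder and pretend image features.
    text_encoder = nn.Linear(D, D).requires_grad_(False)
    image_feats = torch.randn(B, D)
    prompter = InputConditionalPrompter(num_ctx=4, embed_dim=D, num_classes=C)
    logits = classify(prompter(image_feats), image_feats, text_encoder)
    print(logits.shape)  # torch.Size([2, 10])
```

In such a setup, only the context tokens and the meta-network would be trained, typically from a few labeled examples per class or tuned at test time, while the vision-language backbone stays frozen; this is what keeps input-dependent prompting cheap relative to full fine-tuning.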
Papers
Thirteen papers, dated from October 14, 2022 to October 15, 2024.