Style Prompting
Style Prompting research focuses on improving the performance and efficiency of large language models (LLMs) and other AI models by carefully crafting input prompts. Current work explores techniques such as prompt tuning (steering a frozen model with learned prompt parameters rather than full retraining), prompt baking (integrating prompts directly into model weights), and multi-representation prompts that improve understanding and generalization across diverse tasks and modalities (e.g., vision-language models). The field matters because effective prompting can substantially improve model performance, reduce computational cost, and mitigate biases, yielding more robust and reliable AI systems in applications ranging from image processing and text generation to urban planning simulations.
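To make the prompt-tuning idea concrete, here is a minimal sketch of soft prompt tuning: a short sequence of learnable embedding vectors is prepended to the input of a frozen model, and only those vectors receive gradient updates. Everything here (ToyLM, PROMPT_LEN, the dummy batch) is an illustrative assumption, not drawn from any of the papers listed below.

import torch
import torch.nn as nn

VOCAB, DIM, PROMPT_LEN = 100, 32, 8

class ToyLM(nn.Module):
    """Stand-in for a pretrained LM: an embedding table plus a linear head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, embeds):
        # Accepts embeddings directly so a soft prompt can be prepended.
        return self.head(embeds)

model = ToyLM()
for p in model.parameters():  # freeze the base model entirely
    p.requires_grad = False

# The only trainable parameters: the soft prompt vectors.
soft_prompt = nn.Parameter(torch.randn(PROMPT_LEN, DIM) * 0.02)
opt = torch.optim.Adam([soft_prompt], lr=1e-3)

tokens = torch.randint(0, VOCAB, (4, 16))                 # dummy input batch
targets = torch.randint(0, VOCAB, (4, 16 + PROMPT_LEN))   # dummy labels

for step in range(100):
    tok_embeds = model.embed(tokens)                      # (B, T, DIM), frozen
    prompt = soft_prompt.unsqueeze(0).expand(tokens.size(0), -1, -1)
    embeds = torch.cat([prompt, tok_embeds], dim=1)       # prepend soft prompt
    logits = model(embeds)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()  # updates only the soft prompt; base weights stay fixed

In a real setting the toy model would be replaced by a pretrained LM that accepts input embeddings, and the soft prompt would be trained on task-specific data; the point of the sketch is that adaptation touches PROMPT_LEN x DIM parameters rather than the full model.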
Papers
RSPrompter: Learning to Prompt for Remote Sensing Instance Segmentation based on Visual Foundation Model
Keyan Chen, Chenyang Liu, Hao Chen, Haotian Zhang, Wenyuan Li, Zhengxia Zou, Zhenwei Shi
Understanding Prompt Tuning for V-L Models Through the Lens of Neural Collapse
Didi Zhu, Zexi Li, Min Zhang, Junkun Yuan, Yunfeng Shao, Jiashuo Liu, Kun Kuang, Yinchuan Li, Chao Wu
A Scalable and Adaptive System to Infer the Industry Sectors of Companies: Prompt + Model Tuning of Generative Language Models
Lele Cao, Vilhelm von Ehrenheim, Astrid Berghult, Cecilia Henje, Richard Anselmo Stahl, Joar Wandborg, Sebastian Stan, Armin Catovic, Erik Ferm, Hannes Ingelhag
Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models
Fengzhu Zeng, Wei Gao