Style Prompting
Style Prompting research focuses on improving the performance and efficiency of large language models (LLMs) and other AI models by carefully crafting input prompts. Current research explores techniques like prompt tuning (adapting model behavior without full retraining), prompt baking (integrating prompts into model weights), and the use of multi-representation prompts to enhance model understanding and generalization across diverse tasks and modalities (e.g., vision-language models). This field is significant because effective prompting can drastically improve model performance, reduce computational costs, and mitigate biases, leading to more robust and reliable AI systems across various applications, including image processing, text generation, and even urban planning simulations.
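The summary above mentions prompt tuning as adapting model behavior without full retraining. As a rough illustration of that idea (not drawn from any of the papers listed below), the following minimal PyTorch sketch prepends a small set of trainable "soft prompt" embeddings to the input while every weight of the base model stays frozen; the TinyLM stand-in model, its sizes, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy stand-in for a pretrained language model: embeddings, encoder, LM head."""
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, inputs_embeds):
        return self.lm_head(self.encoder(inputs_embeds))

class PromptTunedLM(nn.Module):
    """Wraps a frozen base model with n_prompt trainable soft-prompt vectors."""
    def __init__(self, base: TinyLM, n_prompt=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze all base-model weights
            p.requires_grad = False
        d_model = base.embed.embedding_dim
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)

    def forward(self, input_ids):
        tok = self.base.embed(input_ids)                                  # (B, T, D)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return self.base(torch.cat([prompt, tok], dim=1))                 # (B, P+T, V)

# Only the soft prompt is optimized; the pretrained weights never change.
model = PromptTunedLM(TinyLM())
opt = torch.optim.Adam([model.soft_prompt], lr=1e-3)
logits = model(torch.randint(0, 1000, (2, 16)))   # dummy batch of token ids
```

Because only a handful of prompt vectors are trained, the approach adapts the model to a new task at a small fraction of the cost of full fine-tuning, which is the efficiency benefit the summary refers to.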
Papers
The Prompt Report: A Systematic Survey of Prompting Techniques
Sander Schulhoff, Michael Ilie, Nishant Balepur, Konstantine Kahadze, Amanda Liu, Chenglei Si, Yinheng Li, Aayush Gupta, HyoJung Han, Sevien Schulhoff, Pranav Sandeep Dulepet, Saurav Vidyadhara, Dayeon Ki, Sweta Agrawal, Chau Pham, Gerson Kroiz, Feileen Li, Hudson Tao, Ashay Srivastava, Hevander Da Costa, Saloni Gupta, Megan L. Rogers, Inna Goncearenco, Giuseppe Sarli, Igor Galynker, Denis Peskoff, Marine Carpuat, Jules White, Shyamal Anadkat, Alexander Hoyle, Philip Resnik
Retrieval Augmented Generation in Prompt-based Text-to-Speech Synthesis with Context-Aware Contrastive Language-Audio Pretraining
Jinlong Xue, Yayue Deng, Yingming Gao, Ya Li