Style Prompting
Style Prompting research focuses on improving the performance and efficiency of large language models (LLMs) and other AI models by carefully crafting input prompts. Current work explores techniques such as prompt tuning (adapting model behavior without full retraining), prompt baking (integrating prompts into model weights), and multi-representation prompts that improve understanding and generalization across diverse tasks and modalities (e.g., vision-language models). The field matters because effective prompting can substantially improve performance, reduce computational costs, and mitigate biases, yielding more robust and reliable AI systems across applications ranging from image processing and text generation to urban planning simulations.
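To make the prompt-tuning idea above concrete, here is a minimal sketch in PyTorch: a small set of trainable "virtual token" embeddings is prepended to the input while the base model stays frozen, so only the prompt parameters are updated. The TinyLM model, the PromptTuner wrapper, and all sizes and data are illustrative assumptions for this sketch, not drawn from any of the papers listed below.

```python
# Minimal soft prompt tuning sketch: learn prompt embeddings, freeze the model.
# TinyLM is a stand-in for a pretrained LM; all names/sizes here are assumptions.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Stand-in for a frozen pretrained language model."""
    def __init__(self, vocab_size=100, d_model=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward_from_embeddings(self, x):
        # Run the encoder on precomputed embeddings (prompt + tokens).
        return self.head(self.encoder(x))

class PromptTuner(nn.Module):
    """Wraps a frozen model with trainable prompt embeddings."""
    def __init__(self, model, n_prompt_tokens=8):
        super().__init__()
        self.model = model
        for p in self.model.parameters():  # freeze the base model
            p.requires_grad = False
        d_model = model.embed.embedding_dim
        # The only trainable parameters: n_prompt_tokens "virtual" embeddings.
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, input_ids):
        tok = self.model.embed(input_ids)                        # (B, T, D)
        prm = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        x = torch.cat([prm, tok], dim=1)                         # prepend prompt
        return self.model.forward_from_embeddings(x)

model = TinyLM()
tuner = PromptTuner(model, n_prompt_tokens=8)
opt = torch.optim.Adam([tuner.prompt], lr=1e-3)  # only the prompt is optimized

ids = torch.randint(0, 100, (4, 10))   # toy batch of token ids
labels = torch.randint(0, 100, (4,))   # toy targets
logits = tuner(ids)[:, 0, :]           # read the prediction at the first position
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
opt.step()
print(f"loss: {loss.item():.3f}")
```

Because gradients flow only into `self.prompt`, the memory and compute cost of adaptation is a tiny fraction of full fine-tuning, which is the efficiency argument made in the summary above.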
Papers
One Category One Prompt: Dataset Distillation using Diffusion Models
Ali Abbasi, Ashkan Shahbazi, Hamed Pirsiavash, Soheil Kolouri
DiPrompT: Disentangled Prompt Tuning for Multiple Latent Domain Generalization in Federated Learning
Sikai Bai, Jie Zhang, Shuaicheng Li, Song Guo, Jingcai Guo, Jun Hou, Tao Han, Xiaocheng Lu
ConstitutionalExperts: Training a Mixture of Principle-based Prompts
Savvas Petridis, Ben Wedin, Ann Yuan, James Wexler, Nithum Thain
An Item is Worth a Prompt: Versatile Image Editing with Disentangled Control
Aosong Feng, Weikang Qiu, Jinbin Bai, Xiao Zhang, Zhen Dong, Kaicheng Zhou, Rex Ying, Leandros Tassiulas