Style Prompting
Style prompting research focuses on improving the performance and efficiency of large language models (LLMs) and other AI models by carefully crafting input prompts. Current work explores techniques such as prompt tuning (adapting model behavior without full retraining), prompt baking (integrating prompts into model weights), and multi-representation prompts that enhance model understanding and generalization across diverse tasks and modalities (e.g., vision-language models). The field matters because effective prompting can substantially improve model performance, reduce computational costs, and mitigate biases, leading to more robust and reliable AI systems across applications ranging from image processing and text generation to urban planning simulations.
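At its simplest, style prompting means prepending explicit style instructions to a task before sending it to a model. The sketch below illustrates this with plain prompt construction; the function name, template wording, and style labels are illustrative assumptions, not drawn from any of the papers listed here.

```python
# Minimal sketch of style prompting: compose a prompt that pairs a task
# with explicit instructions about the desired output style.
# All names and template text here are hypothetical examples.

def build_style_prompt(task: str, style: str, audience: str = "a general reader") -> str:
    """Return a prompt asking the model to answer `task` in the given `style`."""
    return (
        f"Respond in the following style: {style}.\n"
        f"Write for {audience}.\n\n"
        f"Task: {task}"
    )

prompt = build_style_prompt(
    task="Explain how transformers process text.",
    style="concise, formal, no analogies",
)
print(prompt)
```

The resulting string would typically be passed as the user message to whatever chat-completion API is in use; more elaborate variants (soft prompt tuning, prompt baking) replace this text template with learned continuous embeddings or weight updates.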
Papers
The Prompt Canvas: A Literature-Based Practitioner Guide for Creating Effective Prompts in Large Language Models
Michael Hewing, Vincent Leinhos
Steps are all you need: Rethinking STEM Education with Prompt Engineering
Krishnasai Addala, Kabir Dev Paul Baghel, Chhavi Kirtani, Avinash Anand, Rajiv Ratn Shah
CoCoP: Enhancing Text Classification with LLM through Code Completion Prompt
Mohammad Mahdi Mohajeri, Mohammad Javad Dousti, Majid Nili Ahmadabadi
MambaXCTrack: Mamba-based Tracker with SSM Cross-correlation and Motion Prompt for Ultrasound Needle Tracking
Yuelin Zhang, Qingpeng Ding, Long Lei, Jiwei Shan, Wenxuan Xie, Tianyi Zhang, Wanquan Yan, Raymond Shing-Yan Tang, Shing Shin Cheng