Prompt-Based
Prompt-based techniques steer the behavior of large language models (LLMs) by crafting effective input prompts rather than updating model weights, and they now span tasks from text classification and code generation to image analysis and robotic control. Current research emphasizes optimizing prompt design, exploring prompt architectures such as chain-of-thought and multi-prompting, and mitigating vulnerabilities such as prompt injection attacks and privacy leakage. Because no task-specific training is required, the approach is data-efficient and easy to adapt, enabling zero-shot learning and supporting safer, more robust use of LLMs across applications.
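To make the terminology concrete, the sketch below shows what zero-shot and chain-of-thought prompts might look like for a topic-labeling task. It is a minimal illustration, not code from any of the listed papers: `call_llm`, the label set, and the prompt wording are hypothetical placeholders for whatever model client and labels a given project actually uses.

```python
# Minimal sketch of zero-shot vs. chain-of-thought prompting for topic labeling.
# `call_llm` is a hypothetical stand-in for an LLM client (any function that
# takes a prompt string and returns the model's text output).

LABELS = ["politics", "sports", "technology"]  # illustrative label set

def zero_shot_prompt(text: str) -> str:
    """Zero-shot classification prompt: task description and labels, no examples."""
    return (
        f"Classify the following tweet into one of {LABELS}.\n"
        f"Tweet: {text}\n"
        "Answer with the label only."
    )

def chain_of_thought_prompt(text: str) -> str:
    """Same task, but the model is asked to reason step by step before answering."""
    return (
        f"Classify the following tweet into one of {LABELS}.\n"
        f"Tweet: {text}\n"
        "Think step by step about the topic, then give the final answer "
        "on the last line as 'Label: <label>'."
    )

def classify(text: str, call_llm) -> str:
    """Run zero-shot classification using a caller-supplied LLM client."""
    return call_llm(zero_shot_prompt(text)).strip()
```

The only difference between the two templates is the instruction to reason before answering; the data efficiency mentioned above comes from the fact that neither template requires labeled training examples or fine-tuning.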
Papers
Zero-shot prompt-based classification: topic labeling in times of foundation models in German Tweets
Simon Münker, Kai Kugler, Achim Rettinger
Human-Free Automated Prompting for Vision-Language Anomaly Detection: Prompt Optimization with Meta-guiding Prompt Scheme
Pi-Wei Chen, Jerry Chun-Wei Lin, Jia Ji, Feng-Hao Yeh, Zih-Ching Chen, Chao-Chun Chen
MultiAgent Collaboration Attack: Investigating Adversarial Attacks in Large Language Model Collaborations via Debate
Alfonso Amayuelas, Xianjun Yang, Antonis Antoniades, Wenyue Hua, Liangming Pan, William Wang
The Fire Thief Is Also the Keeper: Balancing Usability and Privacy in Prompts
Zhili Shen, Zihang Xi, Ying He, Wei Tong, Jingyu Hua, Sheng Zhong