Complex Prompt
Complex prompt engineering optimizes the instructions given to large language models (LLMs) to elicit desired outputs, improving both performance and control. Current research explores techniques such as multi-step prompting, prefix-tuning, and reinforcement learning-based optimization, often applied to models like the GPT and Llama series, across tasks including text generation, image creation, and question answering. The field matters because effective prompts are essential for unlocking the full potential of LLMs and mitigating their limitations, with impact on applications ranging from software development to scientific research.
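As a concrete illustration of the multi-step prompting mentioned above, the sketch below chains two model calls: the first asks for a plan, the second feeds that plan back in to produce a final answer. The `call_llm` function is a hypothetical stand-in, not an API from any of the listed papers; here it is a deterministic stub so the pipeline runs end to end.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real implementation would send
    # `prompt` to an LLM API and return the model's completion.
    return f"<response to: {prompt[:40]}>"

def multi_step_prompt(question: str) -> str:
    # Step 1: ask the model to decompose the problem into sub-steps.
    plan = call_llm(f"List the steps needed to answer: {question}")
    # Step 2: feed the plan back in and request the final answer.
    answer = call_llm(
        f"Question: {question}\nPlan: {plan}\nNow give the final answer."
    )
    return answer

result = multi_step_prompt("Why is the sky blue?")
print(result)
```

Swapping the stub for a real API client turns this into a working two-stage pipeline; the key design point is that each stage's output becomes part of the next stage's prompt.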
Papers
CSPS: A Communication-Efficient Sequence-Parallelism based Serving System for Transformer based Models with Long Prompts
Zeyu Zhang, Haiying Shen
From Commands to Prompts: LLM-based Semantic File System for AIOS
Zeru Shi, Kai Mei, Mingyu Jin, Yongye Su, Chaoji Zuo, Wenyue Hua, Wujiang Xu, Yujie Ren, Zirui Liu, Mengnan Du, Dong Deng, Yongfeng Zhang
Reprojection Errors as Prompts for Efficient Scene Coordinate Regression
Ting-Ru Liu, Hsuan-Kung Yang, Jou-Min Liu, Chun-Wei Huang, Tsung-Chih Chiang, Quan Kong, Norimasa Kobori, Chun-Yi Lee
Automating Robot Failure Recovery Using Vision-Language Models With Optimized Prompts
Hongyi Chen, Yunchao Yao, Ruixuan Liu, Changliu Liu, Jeffrey Ichnowski
Understanding the Relationship between Prompts and Response Uncertainty in Large Language Models
Ze Yu Zhang, Arun Verma, Finale Doshi-Velez, Bryan Kian Hsiang Low
Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL
Yunseon Choi, Sangmin Bae, Seonghyun Ban, Minchan Jeong, Chuheng Zhang, Lei Song, Li Zhao, Jiang Bian, Kee-Eung Kim