Prompt Engineering
Prompt engineering is the art and science of crafting effective instructions—prompts—to guide large language models (LLMs) towards desired outputs. Current research focuses on developing automated methods for prompt optimization, exploring techniques like chain-of-thought prompting, and adapting prompts to specific LLMs and tasks (e.g., code generation, question answering, medical image analysis). This field is significant because effective prompt engineering dramatically improves the accuracy, efficiency, and reliability of LLMs across diverse applications, ranging from healthcare and education to software development and scientific research.
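To make the chain-of-thought idea mentioned above concrete, below is a minimal sketch in Python of how a zero-shot chain-of-thought prompt differs from a direct prompt. The function names (call_llm, direct_prompt, chain_of_thought_prompt) and the exact prompt wording are illustrative assumptions, not taken from any of the papers listed here; call_llm is a hypothetical placeholder for whatever model client is actually used.

    # Sketch of zero-shot chain-of-thought prompting, assuming a generic
    # text-in/text-out LLM interface. `call_llm` is a hypothetical stand-in
    # for a real client (OpenAI, Gemini, a local model, etc.).

    def call_llm(prompt: str) -> str:
        """Hypothetical placeholder: send `prompt` to an LLM and return its reply."""
        raise NotImplementedError("Wire this up to the model of your choice.")


    def direct_prompt(question: str) -> str:
        """Ask for the answer directly, with no intermediate reasoning."""
        return f"Question: {question}\nAnswer:"


    def chain_of_thought_prompt(question: str) -> str:
        """Ask the model to reason step by step before giving a final answer."""
        return (
            f"Question: {question}\n"
            "Let's think step by step, then give the final answer "
            "on a new line starting with 'Answer:'."
        )


    if __name__ == "__main__":
        question = "A train travels 120 km in 1.5 hours. What is its average speed?"
        print(chain_of_thought_prompt(question))
        # reply = call_llm(chain_of_thought_prompt(question))  # enable once call_llm is implemented

The only difference between the two prompts is the instruction to reason step by step before answering; in practice that small change is what distinguishes chain-of-thought prompting from direct prompting, and much of the automated prompt-optimization work surveyed here searches over exactly this kind of instruction wording.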
Papers
Adaptive In-conversation Team Building for Language Model Agents
Linxin Song, Jiale Liu, Jieyu Zhang, Shaokun Zhang, Ao Luo, Shijian Wang, Qingyun Wu, Chi Wang
Can Graph Learning Improve Planning in LLM-based Agents?
Xixi Wu, Yifei Shen, Caihua Shan, Kaitao Song, Siwei Wang, Bohang Zhang, Jiarui Feng, Hong Cheng, Wei Chen, Yun Xiong, Dongsheng Li
Towards A Human-in-the-Loop LLM Approach to Collaborative Discourse Analysis
Clayton Cohn, Caitlin Snyder, Justin Montenegro, Gautam Biswas
Prompting Task Trees using Gemini: Methodologies and Insights
Pallavi Tandra
FOKE: A Personalized and Explainable Education Framework Integrating Foundation Models, Knowledge Graphs, and Prompt Engineering
Silan Hu, Xiaoning Wang