Past-Present Temporal Programming
Past-present temporal programming focuses on using large language models (LLMs) to generate and optimize programs, particularly in scenarios involving complex reasoning or interaction with external tools. Current research emphasizes efficient methods for prompt engineering, including novel programming languages and frameworks designed to bridge the gap between traditional programming and LLM-based approaches, as well as techniques for verifying the correctness of LLM-generated code and mitigating hallucinations in it. This field is significant because it promises to accelerate software development, improve the reliability of AI systems, and enhance the capabilities of AI in domains such as healthcare and scientific computing.
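As a rough illustration of the program-prompt interleaving these frameworks aim at (this is not any specific framework's API; `complete`, `solve_with_program`, and the toy safety check are hypothetical names invented for this sketch), ordinary control flow can wrap LLM calls and inspect their output before use:

```python
# Hypothetical sketch: interleaving program logic with LLM prompts.
# `complete` stands in for a real model API call; here it is a stub.
def complete(prompt: str) -> str:
    # A real implementation would query an LLM endpoint here.
    return f"[model answer to: {prompt!r}]"

def solve_with_program(question: str) -> str:
    # Step 1: ask the model to produce code for the question.
    code = complete(f"Write Python code to answer: {question}")
    # Step 2: a verification pass could reject suspect output,
    # one way of mitigating hallucinated or unsafe code.
    if "import os" in code:  # toy placeholder check, not a real verifier
        raise ValueError("disallowed code")
    return code

print(solve_with_program("What is 17 * 23?"))
```

The point of the sketch is only the shape: prompts become expressions inside a host program, so verification and tool use sit in ordinary code around the model call.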
Papers
Can LLMs Reason in the Wild with Programs?
Yuan Yang, Siheng Xiong, Ali Payani, Ehsan Shareghi, Faramarz Fekri
APPL: A Prompt Programming Language for Harmonious Integration of Programs and Large Language Model Prompts
Honghua Dong, Qidong Su, Yubo Gao, Zhaoyu Li, Yangjun Ruan, Gennady Pekhimenko, Chris J. Maddison, Xujie Si