Paper ID: 2405.08965
MTLLM: LLMs are Meaning-Typed Code Constructs
Jason Mars, Yiping Kang, Jayanaka L. Dantanarayana, Chandra Irugalbandara, Kugesan Sivasothynathan, Christopher Clarke, Baichuan Li, Lingjia Tang
Programming with Generative AI (GenAI) models, which frequently involves using large language models (LLMs) to implement specific functionality, has seen rapid growth in adoption. However, it remains a complex process: developers must manually craft text inputs for LLMs, a practice known as prompt engineering, and then translate the natural-language outputs produced by LLMs back into symbolic representations (values, types, etc.) that the surrounding code can consume. Although frameworks have been proposed to facilitate prompt engineering, these tools are often complex and challenging for developers to adopt. Instead, this paper presents a simplified approach to integrating LLMs into programming through an abstraction layer that hides the complexity of gluing traditional programming and LLMs together. Our approach exploits the semantic richness already present in programs to automatically translate between traditional programming languages and the natural language understood by LLMs, eliminating developer effort such as prompt engineering and reducing overall complexity. Specifically, we design three novel code constructs coupled with an automated runtime management system that bridges the gap between traditional symbolic code and LLMs. We present a fully functional, production-grade implementation of our approach and compare it to state-of-the-art (SOTA) LLM software development tools. We further present real-world case studies demonstrating the efficacy of our proposed abstraction, which seamlessly utilizes LLMs to solve problems in place of potentially complex traditional programming logic.
Submitted: May 14, 2024
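
To illustrate the core idea of the abstraction, here is a minimal Python sketch. It is not the paper's actual MTLLM constructs (which are realized as language-level features with an automated runtime); it only mimics their spirit. The names by_llm, call_llm, and translate_to_french are hypothetical, and call_llm is a stand-in for any real chat-completion client.

    # Hypothetical sketch: a decorator that turns a typed, bodyless Python
    # function into an LLM call. The function's name, docstring, parameter
    # values, and return annotation supply the "meaning" used to build the
    # prompt automatically; the model's text reply is coerced back into the
    # declared return type, so application code never writes a prompt.
    import functools
    import inspect
    import json
    from typing import Callable, TypeVar, get_type_hints

    T = TypeVar("T")

    def call_llm(prompt: str) -> str:
        """Stand-in for a real LLM client; replace with an actual API call."""
        raise NotImplementedError

    def by_llm(fn: Callable[..., T]) -> Callable[..., T]:
        hints = get_type_hints(fn)
        ret_type = hints.pop("return", str)
        sig = inspect.signature(fn)

        @functools.wraps(fn)
        def wrapper(*args, **kwargs) -> T:
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            # Build the prompt from code semantics: name, docstring, typed inputs.
            prompt = (
                f"Task: {fn.__name__.replace('_', ' ')}\n"
                f"Description: {fn.__doc__ or ''}\n"
                f"Inputs: {json.dumps(dict(bound.arguments), default=str)}\n"
                f"Reply with only a JSON value of type {ret_type.__name__}."
            )
            raw = call_llm(prompt)
            # Simple coercion back into the declared type; a real runtime
            # would need richer conversion and error handling.
            return ret_type(json.loads(raw))

        return wrapper

    @by_llm
    def translate_to_french(text: str) -> str:
        """Translate the given English text into French."""
        ...

With this shape, calling translate_to_french("Good morning") builds a prompt from the signature and docstring, sends it to the model, and returns a plain Python str, so no prompt engineering appears in the application code.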