Code Generation
Code generation research focuses on using large language models (LLMs) to automatically produce functional and secure code from natural language descriptions and other inputs. Current efforts concentrate on improving the accuracy and efficiency of code generation, both by developing novel training objectives such as horizon-length prediction and by employing techniques such as multi-agent frameworks, Monte Carlo Tree Search, and prompt engineering to guide LLMs toward better solutions. The field matters because it promises to substantially increase developer productivity and accelerate software development, while also raising questions about code security and reliability that still require investigation.
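The efficiency theme is easy to make concrete. Below is a minimal, hypothetical Python sketch of an adaptive sampling loop in the spirit of the Adaptive-Consistency paper listed under Papers: rather than drawing a fixed budget of samples and majority-voting at the end, it stops as soon as one candidate answer holds a clear lead. The generate_answer stub and the simple vote-margin stopping rule are illustrative assumptions, not the paper's actual method (which uses a statistical stopping criterion over the vote distribution).

```python
import random
from collections import Counter

def generate_answer(prompt: str) -> str:
    # Hypothetical stand-in for one stochastic sample from a code LLM;
    # here we simulate a model that returns the correct snippet ~80%
    # of the time and a buggy variant otherwise.
    return random.choices(["return a + b", "return b - a"], weights=[0.8, 0.2])[0]

def adaptive_majority_vote(prompt: str, max_samples: int = 40, margin: int = 3) -> str:
    # Draw samples one at a time and stop early once the leading answer
    # is `margin` votes ahead of the runner-up, instead of always
    # spending the full sampling budget.
    votes = Counter()
    for _ in range(max_samples):
        votes[generate_answer(prompt)] += 1
        ranked = votes.most_common(2)
        lead = ranked[0][1] - (ranked[1][1] if len(ranked) > 1 else 0)
        if lead >= margin:
            break
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    print(adaptive_majority_vote("def add(a, b):"))
```

The point of the pattern is that easy inputs terminate after a handful of samples while hard, high-disagreement inputs spend the full budget; any stopping criterion with that shape (the fixed margin here is just the simplest choice) trades a small accuracy risk for a large reduction in sampling cost.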
Papers
Uncovering and Quantifying Social Biases in Code Generation
Yan Liu, Xiaokang Chen, Yan Gao, Zhe Su, Fengji Zhang, Daoguang Zan, Jian-Guang Lou, Pin-Yu Chen, Tsung-Yi Ho
Who Wrote this Code? Watermarking for Code Generation
Taehyun Lee, Seokhee Hong, Jaewoo Ahn, Ilgee Hong, Hwaran Lee, Sangdoo Yun, Jamin Shin, Gunhee Kim
ByteSized32: A Corpus and Challenge Task for Generating Task-Specific World Models Expressed as Text Games
Ruoyao Wang, Graham Todd, Eric Yuan, Ziang Xiao, Marc-Alexandre Côté, Peter Jansen
From Words to Wires: Generating Functioning Electronic Devices from Natural Language Descriptions
Peter Jansen
Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning and Coding with LLMs
Pranjal Aggarwal, Aman Madaan, Yiming Yang, Mausam
Towards Code Generation from BDD Test Case Specifications: A Vision
Leon Chemnitz, David Reichenbach, Hani Aldebes, Mariam Naveed, Krishna Narasimhan, Mira Mezini