Code Generation
Code generation research focuses on using large language models (LLMs) to automatically produce functional, secure code from natural language descriptions or other inputs. Current efforts concentrate on improving the accuracy and efficiency of generated code, including novel training objectives such as horizon-length prediction and techniques such as multi-agent frameworks, Monte Carlo Tree Search, and prompt engineering that guide LLMs toward better solutions. The field matters because it promises to substantially increase developer productivity and accelerate software development, while raising open questions about the security and reliability of generated code.
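One recurring efficiency theme in the listed work, model cascading, can be illustrated with a toy sketch: query a cheap model first, validate its output against lightweight unit tests, and escalate to a stronger (more expensive) model only when the cheap attempt fails. All names here (`cheap_model`, `strong_model`, `cascade`) are hypothetical illustrations, not any paper's actual API.

```python
from typing import Callable, List

def passes_tests(code: str, tests: List[Callable[[dict], bool]]) -> bool:
    """Execute generated code and check it against lightweight unit tests."""
    namespace: dict = {}
    try:
        exec(code, namespace)  # NOTE: a real system must sandbox untrusted code
    except Exception:
        return False
    return all(test(namespace) for test in tests)

def cascade(prompt: str,
            models: List[Callable[[str], str]],
            tests: List[Callable[[dict], bool]]) -> str:
    """Query models cheapest-first; return the first output that passes."""
    for model in models:
        code = model(prompt)
        if passes_tests(code, tests):
            return code
    return ""  # every tier failed

# Hypothetical model tiers: the cheap one emits buggy code,
# the strong one emits a correct implementation.
cheap_model = lambda p: "def add(a, b):\n    return a - b"   # buggy
strong_model = lambda p: "def add(a, b):\n    return a + b"  # correct

tests = [lambda ns: ns["add"](2, 3) == 5]
best = cascade("Write add(a, b) returning the sum", [cheap_model, strong_model], tests)
```

The cheap tier's output fails the test, so `cascade` escalates and returns the strong model's code; in practice the test suite acts as the routing signal that keeps most requests on the cheap tier.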
Papers
Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation
Jingchang Chen, Hongxuan Tang, Zheng Chu, Qianglong Chen, Zekun Wang, Ming Liu, Bing Qin
From Symbolic Tasks to Code Generation: Diversification Yields Better Task Performers
Dylan Zhang, Justin Wang, Francois Charton
AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data
Zifan Song, Yudong Wang, Wenwei Zhang, Kuikun Liu, Chengqi Lyu, Demin Song, Qipeng Guo, Hang Yan, Dahua Lin, Kai Chen, Cairong Zhao
Large Language Models for Code Summarization
Balázs Szalontai, Gergő Szalay, Tamás Márton, Anna Sike, Balázs Pintér, Tibor Gregorics
Model Cascading for Code: Reducing Inference Costs with Model Cascading for LLM Based Code Generation
Boyuan Chen, Mingzhi Zhu, Brendan Dolan-Gavitt, Muhammad Shafique, Siddharth Garg
ChatGPT Code Detection: Techniques for Uncovering the Source of Code
Marc Oedingen, Raphael C. Engelhardt, Robin Denz, Maximilian Hammer, Wolfgang Konen
EffiLearner: Enhancing Efficiency of Generated Code via Self-Optimization
Dong Huang, Jianbo Dai, Han Weng, Puzhen Wu, Yuhao Qing, Heming Cui, Zhijiang Guo, Jie M. Zhang