Code Generation
Code generation research focuses on using large language models (LLMs) to automatically produce functional and secure code from natural-language descriptions or other inputs. Current efforts concentrate on improving the accuracy and efficiency of generated code, for example by developing novel training objectives such as horizon-length prediction and by applying multi-agent frameworks, Monte Carlo Tree Search, and prompt engineering to guide LLMs toward better solutions. The field matters because it promises to substantially increase developer productivity and accelerate software development, while also raising questions about the security and reliability of generated code that require further investigation.
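As a minimal illustration of the prompt-driven code generation mentioned above (not drawn from any of the papers listed below), the following sketch uses the Hugging Face transformers text-generation pipeline; the model name, prompt, and decoding parameters are illustrative assumptions.

```python
# Minimal sketch: natural-language-to-code generation with an LLM.
# Assumptions: the `transformers` library is installed and the placeholder
# code model "bigcode/starcoder2-3b" can be loaded from the Hugging Face Hub.
from transformers import pipeline

generator = pipeline("text-generation", model="bigcode/starcoder2-3b")

# A natural-language description plus a function signature as the prompt.
prompt = (
    "# Write a Python function that returns the n-th Fibonacci number.\n"
    "def fibonacci(n):\n"
)

# Generate a completion; decoding settings here are illustrative, not tuned.
result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```

In practice, the listed papers study how to go beyond this baseline, e.g. by benchmarking generated code, adding search or multi-agent orchestration around the model, or assessing the security of its outputs.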
Papers
The Fault in our Stars: Quality Assessment of Code Generation Benchmarks
Mohammed Latif Siddiq, Simantika Dristi, Joy Saha, Joanna C. S. Santos
Towards DNA-Encoded Library Generation with GFlowNets
Michał Koziarski, Mohammed Abukalam, Vedant Shah, Louis Vaillancourt, Doris Alexandra Schuetz, Moksh Jain, Almer van der Sloot, Mathieu Bourgey, Anne Marinier, Yoshua Bengio
MMCode: Benchmarking Multimodal Large Language Models for Code Generation with Visually Rich Programming Problems
Kaixin Li, Yuchen Tian, Qisheng Hu, Ziyang Luo, Zhiyong Huang, Jing Ma
BISCUIT: Scaffolding LLM-Generated Code with Ephemeral UIs in Computational Notebooks
Ruijia Cheng, Titus Barik, Alan Leung, Fred Hohman, Jeffrey Nichols
Analyzing the Performance of Large Language Models on Code Summarization
Rajarshi Haldar, Julia Hockenmaier
Register Your Forests: Decision Tree Ensemble Optimization by Explicit CPU Register Allocation
Daniel Biebert, Christian Hakert, Kuan-Hsun Chen, Jian-Jia Chen