Code Generation
Code generation research focuses on using large language models (LLMs) to automatically produce functional and secure code from natural language descriptions or other inputs. Current efforts concentrate on improving accuracy and efficiency, including novel training objectives such as horizon-length prediction and techniques such as multi-agent frameworks, Monte Carlo Tree Search, and prompt engineering that guide LLMs toward better solutions. The field is significant because it promises to substantially increase developer productivity and accelerate software development, while also raising questions about the security and reliability of generated code that require further investigation.
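Many of the guidance techniques mentioned above share a common loop: a model proposes code, the code is executed against tests, and the failure feedback is folded back into the next prompt. The Python sketch below illustrates that loop under assumptions of my own; it is not taken from any paper listed here. In particular, generate_code is a hypothetical stub standing in for a real model call, and the task, tests, and refinement prompt are purely illustrative.

    """Minimal sketch of test-feedback-guided code generation.
    The LLM call is stubbed out; a real system would query a code model here."""
    import subprocess
    import sys
    import tempfile
    import textwrap


    def generate_code(prompt: str) -> str:
        # Hypothetical placeholder for an LLM call (assumption, not a real API).
        # It ignores the prompt and returns a fixed candidate for demonstration.
        return textwrap.dedent(
            """
            def add(a, b):
                return a + b
            """
        )


    def run_tests(code: str, tests: str) -> tuple[bool, str]:
        # Execute the candidate code together with its unit tests in a subprocess.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code + "\n" + tests)
            path = f.name
        proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
        return proc.returncode == 0, proc.stderr


    def refine_until_passing(task: str, tests: str, max_rounds: int = 3) -> str:
        # Regenerate code up to max_rounds times, feeding test failures back
        # into the prompt; return the last candidate if none passes.
        prompt = task
        for _ in range(max_rounds):
            code = generate_code(prompt)
            ok, err = run_tests(code, tests)
            if ok:
                return code
            prompt = f"{task}\nThe previous attempt failed with:\n{err}\nFix it."
        return code


    if __name__ == "__main__":
        tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\nprint('tests passed')"
        print(refine_until_passing("Write a function add(a, b).", tests))

Multi-agent and search-based approaches elaborate on this same skeleton, for example by assigning generation and review to separate agents or by treating each refinement step as a node to expand in a Monte Carlo Tree Search over candidate programs.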
Papers
Granite Code Models: A Family of Open Foundation Models for Code Intelligence
Mayank Mishra, Matt Stallone, Gaoyuan Zhang, Yikang Shen, Aditya Prasad, Adriana Meza Soria, Michele Merler, Parameswaran Selvam, Saptha Surendran, Shivdeep Singh, Manish Sethi, Xuan-Hong Dang, Pengyuan Li, Kun-Lung Wu, Syed Zawad, Andrew Coleman, Matthew White, Mark Lewis, Raju Pavuluri, Yan Koyfman, Boris Lublinsky, Maximilien de Bayser, Ibrahim Abdelaziz, Kinjal Basu, Mayank Agarwal, Yi Zhou, Chris Johnson, Aanchal Goyal, Hima Patel, Yousaf Shah, Petros Zerfos, Heiko Ludwig, Asim Munawar, Maxwell Crouse, Pavan Kapanipathi, Shweta Salaria, Bob Calio, Sophia Wen, Seetharami Seelam, Brian Belgodere, Carlos Fonseca, Amith Singhee, Nirmit Desai, David D. Cox, Ruchir Puri, Rameswar Panda
Sketch Then Generate: Providing Incremental User Feedback and Guiding LLM Code Generation through Language-Oriented Code Sketches
Chen Zhu-Tian, Zeyu Xiong, Xiaoshuo Yao, Elena Glassman
Performance-Aligned LLMs for Generating Fast Code
Daniel Nichols, Pranav Polasam, Harshitha Menon, Aniruddha Marathe, Todd Gamblin, Abhinav Bhatele
PECC: Problem Extraction and Coding Challenges
Patrick Haller, Jonas Golde, Alan Akbik
Do Neutral Prompts Produce Insecure Code? FormAI-v2 Dataset: Labelling Vulnerabilities in Code Generated by Large Language Models
Norbert Tihanyi, Tamas Bisztray, Mohamed Amine Ferrag, Ridhi Jain, Lucas C. Cordeiro
Assessing GPT-4-Vision's Capabilities in UML-Based Code Generation
Gábor Antal, Richárd Vozár, Rudolf Ferenc
Class-Level Code Generation from Natural Language Using Iterative, Tool-Enhanced Reasoning over Repository
Ajinkya Deshpande, Anmol Agarwal, Shashank Shet, Arun Iyer, Aditya Kanade, Ramakrishna Bairi, Suresh Parthasarathy