LLM Generated Code
Large language models (LLMs) are increasingly used to generate code, with the goal of automating software development and improving programmer productivity. Current research focuses on mitigating the security vulnerabilities and functional inaccuracies frequently found in LLM-generated code, exploring techniques such as improved prompting strategies, fine-tuning on synthetic secure-code datasets, and post-generation verification. As LLM-generated code sees wider adoption, robust methods for ensuring its quality, security, and compliance become essential, with the potential to reshape software engineering workflows.
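To make the post-generation verification idea above concrete, here is a minimal, hypothetical sketch: before accepting LLM-generated Python, parse it and scan the AST for calls on an illustrative deny-list. The function name, deny-list contents, and overall design are assumptions for illustration, not a method from any specific paper.

```python
import ast

# Illustrative deny-list; a real checker would use a vetted policy.
BANNED_CALLS = {"eval", "exec", "os.system"}

def verify_generated_code(source: str) -> list[str]:
    """Return issues found in LLM-generated code; an empty list means it passed."""
    try:
        tree = ast.parse(source)
    except SyntaxError as e:
        return [f"syntax error: {e.msg} (line {e.lineno})"]
    issues = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            # Recover a dotted name like "os.system" or a bare name like "eval".
            if isinstance(node.func, ast.Name):
                name = node.func.id
            elif isinstance(node.func, ast.Attribute) and isinstance(node.func.value, ast.Name):
                name = f"{node.func.value.id}.{node.func.attr}"
            else:
                continue
            if name in BANNED_CALLS:
                issues.append(f"banned call '{name}' at line {node.lineno}")
    return issues
```

A real pipeline would combine such static checks with test execution in a sandbox; this sketch only shows where a verification gate sits between generation and acceptance.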