LLM-Generated Code
Large language models (LLMs) are increasingly used to generate code, with the aim of automating software development tasks and improving programmer productivity. Current research focuses on mitigating the security vulnerabilities and functional inaccuracies frequently found in LLM-generated code, exploring techniques such as improved prompting strategies, fine-tuning on synthetic secure-code datasets, and post-generation verification. The field matters because widespread adoption of generated code demands robust methods for ensuring quality, security, and compliance, with the potential to reshape software engineering workflows.
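To make the last of these techniques concrete, below is a minimal sketch of a post-generation verification pass, using only the Python standard library. The `verify_generated_code` helper and its three checks (parse, static screen, sandboxed smoke test) are illustrative assumptions for this overview, not the method of any particular paper; a production pipeline would substitute a real analyzer such as Bandit or Semgrep for the crude static screen shown here.

```python
import ast
import os
import subprocess
import sys
import tempfile

def verify_generated_code(code: str, timeout: int = 10) -> list[str]:
    """Run lightweight post-generation checks on a candidate snippet.

    Returns a list of human-readable findings; an empty list means the
    snippet passed every check.
    """
    findings: list[str] = []

    # 1. Syntactic validity: reject code that does not even parse.
    try:
        tree = ast.parse(code)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]

    # 2. Static screen: flag calls commonly associated with arbitrary
    #    code execution (a crude stand-in for a real static analyzer).
    risky = {"eval", "exec"}
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in risky:
                findings.append(
                    f"risky call `{node.func.id}` at line {node.lineno}"
                )

    # 3. Dynamic smoke test: execute the snippet in a fresh interpreter
    #    with a timeout so runaway generations cannot hang the pipeline.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        if result.returncode != 0:
            findings.append(f"runtime failure: {result.stderr.strip()}")
    except subprocess.TimeoutExpired:
        findings.append(f"timed out after {timeout}s")
    finally:
        os.unlink(path)

    return findings

if __name__ == "__main__":
    candidate = "user_input = '2 + 2'\nprint(eval(user_input))\n"
    for finding in verify_generated_code(candidate):
        print(finding)
```

Run on the sample snippet, the pass reports the `eval` call as a finding even though the code executes cleanly, illustrating why static and dynamic checks are typically combined rather than used alone.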