Insecure Code
Insecure code generated by large language models (LLMs) is a significant concern in software development, prompting research on identifying and mitigating the vulnerabilities these models introduce. Current studies evaluate LLMs such as GPT-3.5, GPT-4, and Code Llama, measuring how often they produce insecure code and how well they can detect and repair vulnerabilities using techniques such as iterative repair and trigger inversion. These findings are important for securing AI-assisted software development and for deploying code generation tools reliably.
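To make the iterative-repair idea concrete, the sketch below shows one plausible loop: generate code, scan it for vulnerabilities, and feed the findings back to the model until the scanner reports nothing. The helpers `llm_complete` and `find_vulnerabilities` are hypothetical placeholders for a model API (e.g., GPT-4 or Code Llama) and a static analyzer; this is an illustrative sketch under those assumptions, not the method of any specific paper.

```python
from typing import Callable, List


def iterative_repair(
    task: str,
    llm_complete: Callable[[str], str],                # hypothetical LLM call: prompt -> code
    find_vulnerabilities: Callable[[str], List[str]],  # hypothetical scanner: code -> findings
    max_rounds: int = 3,
) -> str:
    """Generate code, scan it, and ask the model to repair it until no findings remain."""
    code = llm_complete(f"Write code for the following task:\n{task}")
    for _ in range(max_rounds):
        findings = find_vulnerabilities(code)
        if not findings:
            break  # scanner reports no known vulnerabilities; stop repairing
        repair_prompt = (
            "The following code has security issues:\n"
            f"{code}\n\n"
            "Reported issues:\n"
            + "\n".join(f"- {f}" for f in findings)
            + "\n\nReturn a fixed version that preserves the original behavior."
        )
        code = llm_complete(repair_prompt)
    return code
```

In practice the scanner step could be backed by any vulnerability detector, and the loop bound guards against the model oscillating between equally flawed versions rather than converging.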