Insecure Code

Insecure code generated by large language models (LLMs) is a significant concern in software development, prompting research into identifying and mitigating the vulnerabilities these models introduce. Current studies use a range of LLMs, including GPT-3.5, GPT-4, and Code Llama, to measure how often generated code is insecure and to assess whether the models themselves can detect and repair vulnerabilities, using techniques such as iterative repair and trigger inversion. These findings are important for hardening AI-assisted software development and for deploying code generation tools reliably.
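
To make the iterative repair idea concrete, the following is a minimal sketch of one such feedback loop: a model generates code, a static analyzer reports findings, and the findings are fed back to the model until no issues remain or a round limit is reached. The callables `generate`, `scan`, and `repair` are hypothetical placeholders for an LLM generation call, a vulnerability scanner, and an LLM repair call; they do not correspond to any specific paper's tooling.

```python
from typing import Callable, List


def iterative_repair(
    task_prompt: str,
    generate: Callable[[str], str],           # prompt -> candidate code (hypothetical LLM call)
    scan: Callable[[str], List[str]],         # code -> list of vulnerability findings (hypothetical analyzer)
    repair: Callable[[str, List[str]], str],  # (code, findings) -> revised code (hypothetical LLM call)
    max_rounds: int = 3,
) -> str:
    """Generate code, then repeatedly feed analyzer findings back to the model."""
    code = generate(task_prompt)
    for _ in range(max_rounds):
        findings = scan(code)
        if not findings:  # analyzer reports no remaining issues
            break
        code = repair(code, findings)
    return code
```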

Papers