Code Security
Code security research focuses on mitigating vulnerabilities in code generated by large language models (LLMs), aiming to improve both the security and the functional correctness of automatically produced software. Current work employs a range of techniques, including prompt engineering, fine-tuning LLMs on synthetic or curated datasets of secure and insecure code, and constrained decoding methods that steer generation toward safer outputs. These advances matter for software security at large: as AI-assisted development becomes more prevalent, they affect both the reliability of the resulting systems and the efficiency of the development process.
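To make the constrained-decoding idea concrete, here is a minimal, hypothetical sketch: a toy vocabulary and next-token logits stand in for an LLM's output head, and a token-level denylist of insecure C APIs is masked out before greedy selection. Real systems typically use richer constraints (grammars, static analyzers, or learned classifiers) rather than a hard-coded denylist; the vocabulary, logit values, and `INSECURE` set below are illustrative assumptions, not taken from any cited paper.

```python
import math

# Toy vocabulary and "model" logits for the next token; a real system would
# take these from an LLM's output head (values here are hypothetical).
VOCAB = ["strncpy", "strcpy", "memcpy", "gets", "fgets"]
LOGITS = [2.0, 3.5, 1.0, 2.5, 1.5]

# Denylist of tokens tied to insecure C APIs (assumption: a simple
# token-level policy; real constrained decoding may be grammar-based).
INSECURE = {"strcpy", "gets"}

def constrained_next_token(vocab, logits, banned):
    """Mask banned tokens to -inf, softmax, and return the argmax token."""
    masked = [l if t not in banned else float("-inf")
              for t, l in zip(vocab, logits)]
    m = max(masked)
    exps = [math.exp(l - m) for l in masked]  # banned tokens get exp(-inf) = 0
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(vocab)), key=lambda i: probs[i])
    return vocab[best], probs

token, probs = constrained_next_token(VOCAB, LOGITS, INSECURE)
# Unconstrained greedy decoding would emit "strcpy" (highest logit, 3.5);
# with the mask applied, decoding falls back to the safer "strncpy".
```

The key design choice is that the constraint is applied to the probability distribution at each decoding step, so the model never emits a banned token, rather than filtering or rewriting the code after generation.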
18 papers
Papers