Code Security
Code security research focuses on mitigating vulnerabilities in code generated by large language models (LLMs), aiming to improve both the security and the correctness of automatically produced software. Current work employs several complementary techniques: prompt engineering, fine-tuning LLMs on synthetic or curated datasets of secure and insecure code, and constrained decoding methods that steer generation toward safer outputs. These advances matter because AI-assisted code development is increasingly prevalent: reducing the vulnerabilities introduced by generated code affects both the reliability of the resulting software systems and the efficiency of the development process.
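To make the constrained-generation idea concrete, the sketch below reranks candidate completions against a small deny-list of insecure Python patterns. This is a heavily simplified assumption-laden illustration, not a method from any particular paper: the pattern list and the helper names (`security_issues`, `pick_safest`) are hypothetical, real systems typically apply such checks at the token level during decoding, and they rely on proper analyzers (e.g., Bandit or CodeQL) rather than regexes.

```python
import re

# Hypothetical deny-list of insecure Python patterns. A production system
# would use static analysis tools instead of regexes.
INSECURE_PATTERNS = [
    (re.compile(r"\beval\s*\("), "use of eval()"),
    (re.compile(r"\bpickle\.loads?\s*\("), "unsafe deserialization via pickle"),
    (re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"), "shell=True in subprocess"),
    (re.compile(r"\bhashlib\.md5\s*\("), "weak hash function (MD5)"),
]


def security_issues(code: str) -> list[str]:
    """Return labels for any insecure patterns found in a code snippet."""
    return [label for pattern, label in INSECURE_PATTERNS if pattern.search(code)]


def pick_safest(candidates: list[str]) -> str:
    """Prefer the candidate completion that triggers the fewest insecure
    patterns. This post-hoc reranking stands in for token-level constrained
    decoding, which would apply similar checks while the model generates."""
    return min(candidates, key=lambda code: len(security_issues(code)))


if __name__ == "__main__":
    candidates = [
        "import subprocess\nsubprocess.run(cmd, shell=True)",
        "import subprocess\nsubprocess.run(['ls', '-l'])",
    ]
    print(pick_safest(candidates))              # the candidate without shell=True
    print(security_issues(candidates[0]))       # ['shell=True in subprocess']
```

In this toy setup the insecure `shell=True` completion is rejected in favor of the list-argument form; the same scoring idea can be folded into the decoding loop so unsafe continuations are penalized before they are ever emitted.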