Code Security

Code security research focuses on mitigating vulnerabilities in code generated by large language models (LLMs), aiming to improve both the security and the correctness of automatically produced software. Current work centers on three techniques: prompt engineering that steers models toward secure coding practices, fine-tuning LLMs on synthetic or curated datasets of secure and insecure code, and constrained decoding that guides generation toward safer outputs (a minimal sketch of this last idea follows below). These advances matter because AI-assisted code development is increasingly prevalent: hardening generated code improves both the reliability of software systems and the efficiency of the development process.
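
To make the constrained-decoding idea concrete, the sketch below penalizes the logits of tokens that begin known-insecure API names during generation, nudging the sampler toward safer alternatives. It uses the Hugging Face `transformers` LogitsProcessor interface; the model choice (`gpt2`), the banned-API list, and the penalty value are illustrative assumptions, not taken from any specific paper.

```python
# Sketch: constrained decoding that down-weights insecure API tokens.
# The banned list and penalty are illustrative, not from a specific paper.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)


class InsecureAPIPenalty(LogitsProcessor):
    """Subtracts a fixed penalty from the logits of token ids that start
    insecure function names (e.g. C's gets/strcpy, Python's eval)."""

    def __init__(self, tokenizer, banned_apis, penalty=8.0):
        self.penalty = penalty
        ids = set()
        for name in banned_apis:
            # BPE tokenizers encode "gets" and " gets" differently,
            # so collect the first token id of both variants.
            for variant in (name, " " + name):
                toks = tokenizer.encode(variant, add_special_tokens=False)
                if toks:
                    ids.add(toks[0])
        self.banned_ids = torch.tensor(sorted(ids))

    def __call__(self, input_ids, scores):
        # Lower the score of every banned first-token at each decoding step.
        scores[:, self.banned_ids.to(scores.device)] -= self.penalty
        return scores


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

processors = LogitsProcessorList(
    [InsecureAPIPenalty(tokenizer, ["gets", "strcpy", "sprintf", "eval"])]
)

prompt = "def read_user_input():\n    "
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,
    logits_processor=processors,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

A soft logit penalty, rather than a hard token ban, is one design choice here: it discourages insecure constructs while still letting the model emit them when no safe continuation exists, which keeps generation from stalling on unusual inputs.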

Papers