Adversarial Code
Adversarial code research focuses on creating subtly altered code snippets—functionally equivalent and easily overlooked by human reviewers, yet capable of fooling machine learning models trained on code analysis or generation tasks. Current research emphasizes developing effective attack methods, often leveraging transformer-based models and generative adversarial networks (GANs), to probe vulnerabilities in pre-trained models of code (PTMCs) across applications such as code authorship attribution, binary code similarity detection, and code comprehension. This work is crucial for assessing the robustness and security of increasingly prevalent AI-powered code analysis tools and for building systems that are more resilient to malicious code manipulation.
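To make the idea concrete, here is a minimal sketch of the kind of semantics-preserving edit these attacks typically search over: renaming an identifier so the program behaves identically while the token sequence a model sees changes. This is an illustrative example, not any specific paper's attack; the names (`RenameIdentifier`, `perturb`) are invented for this sketch.

```python
import ast

class RenameIdentifier(ast.NodeTransformer):
    """Rename one identifier throughout a snippet; program semantics are unchanged."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        # Variable reads/writes
        if node.id == self.old:
            node.id = self.new
        return node

    def visit_arg(self, node):
        # Function parameters
        if node.arg == self.old:
            node.arg = self.new
        return node

def perturb(source: str, old: str, new: str) -> str:
    """Return a functionally equivalent variant of `source` with `old` renamed to `new`."""
    tree = RenameIdentifier(old, new).visit(ast.parse(source))
    return ast.unparse(tree)

original = "def add(count, step):\n    return count + step\n"
adversarial = perturb(original, "count", "idx")
```

A real attack would generate many such candidates (renames, dead-code insertion, statement reordering) and keep the one that flips the target model's prediction; this sketch only shows the perturbation step.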