Adversarial Code

Adversarial code research focuses on crafting subtly perturbed code snippets that preserve the program's behavior and look innocuous to human reviewers, yet fool machine learning models trained for code analysis or generation. Current work emphasizes developing effective attack methods, often built on transformer-based models and generative adversarial networks (GANs), to probe vulnerabilities in pre-trained models of code (PTMCs) across applications such as code authorship attribution, binary code similarity detection, and code comprehension. Typical perturbations include identifier renaming and dead-code insertion, which change the token sequence a model sees without changing what the program computes. This work is crucial for assessing the robustness and security of increasingly prevalent AI-powered code analysis tools and for building systems that are more resilient to malicious code manipulation.
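
As a concrete illustration of the general technique (a minimal sketch, not the method of any particular paper), the snippet below mounts a greedy identifier-renaming attack: every candidate rename preserves the program's behavior, and the attacker keeps whichever one most reduces the victim model's confidence in the correct label. `victim_confidence`, the candidate names, and the `average` sample are all hypothetical placeholders; a real attack would query a trained PTMC instead of the toy heuristic used here.

```python
import ast

class RenameIdentifier(ast.NodeTransformer):
    """Semantics-preserving perturbation: rename one local identifier everywhere."""

    def __init__(self, old_name: str, new_name: str):
        self.old_name = old_name
        self.new_name = new_name

    def visit_Name(self, node: ast.Name) -> ast.Name:
        # Covers both reads and writes of the variable.
        if node.id == self.old_name:
            node.id = self.new_name
        return node

    def visit_arg(self, node: ast.arg) -> ast.arg:
        # Covers function parameters; visit children (e.g., annotations) too.
        self.generic_visit(node)
        if node.arg == self.old_name:
            node.arg = self.new_name
        return node

def victim_confidence(source: str) -> float:
    """Toy stand-in for a victim model's confidence in the correct label
    (e.g., the true author in authorship attribution). A real attack would
    query a trained PTMC here; this heuristic merely lets the sketch run
    end to end (it falls as identifiers get terser)."""
    tokens = source.split()
    return sum(len(t) for t in tokens) / len(tokens)

def greedy_rename_attack(source: str, target: str, candidates: list[str]) -> str:
    """Keep the rename that most reduces the victim's confidence."""
    best_source = source
    best_score = victim_confidence(source)
    for cand in candidates:
        tree = ast.parse(source)
        RenameIdentifier(target, cand).visit(tree)
        perturbed = ast.unparse(tree)
        score = victim_confidence(perturbed)
        if score < best_score:  # lower confidence = stronger attack
            best_source, best_score = perturbed, score
    return best_source

SAMPLE = """
def average(values):
    total = sum(values)
    return total / len(values)
"""

# The perturbed snippet computes exactly the same result as the original.
print(greedy_rename_attack(SAMPLE, "total", ["t", "x0", "acc"]))
```

The same greedy loop generalizes to richer perturbation families (dead-code insertion, statement reordering); only the transformer class and the candidate set change, while the behavior-preservation guarantee must hold for every candidate.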

Papers