Adversarial Patch Generation

Adversarial patch generation focuses on crafting small image regions (patches) that, when placed in a scene, cause object detection systems to misclassify objects or miss them entirely. Unlike full-image perturbations, a patch is confined to a localized region, which makes it physically realizable, for example printed and attached to an object. Current research emphasizes generating naturalistic, visually inconspicuous patches using generative models such as diffusion models and GANs, and improving the patches' transferability across multiple object detection models and diverse visual tasks, including those involving visual reasoning. This research is significant because it exposes vulnerabilities in object detection systems, with implications for security, privacy, and the broader trustworthiness of AI systems in real-world applications.
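
To make the basic mechanism concrete, below is a minimal sketch of gradient-based patch optimization. It is not the method of any paper listed here: a pretrained ResNet-18 classifier stands in for an object detector, the 48x48 patch size, random placement, toaster target class, and random stand-in images are all illustrative assumptions, and naturalness priors (diffusion/GAN) and detector-specific losses from current work are omitted.

```python
# Minimal sketch: optimize a patch so that any image containing it is
# pushed toward a chosen target class. Assumes PyTorch + torchvision.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Victim model (frozen). Real patch attacks target detectors such as YOLO;
# a classifier keeps the sketch short. ImageNet normalization is skipped
# here for brevity since the inputs below are random placeholders.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

# The patch itself is the only learnable tensor.
patch = torch.rand(1, 3, 48, 48, device=device, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)
target_class = 859  # illustrative target label ("toaster" in ImageNet)

def apply_patch(images, patch):
    """Paste the patch at a random location in each image, so the
    optimization is an expectation over placements (a simple stand-in
    for expectation-over-transformation)."""
    out = images.clone()
    b, _, h, w = images.shape
    ph, pw = patch.shape[-2:]
    for i in range(b):
        y = torch.randint(0, h - ph + 1, (1,)).item()
        x = torch.randint(0, w - pw + 1, (1,)).item()
        out[i, :, y:y + ph, x:x + pw] = patch[0]
    return out

for step in range(200):
    # Random images stand in for a real training set.
    images = torch.rand(8, 3, 224, 224, device=device)
    logits = model(apply_patch(images, patch.clamp(0, 1)))
    # Targeted attack: push every patched image toward the target class.
    labels = torch.full((8,), target_class, dtype=torch.long, device=device)
    loss = F.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the patch a valid image
```

The same loop extends to the settings the papers study: swapping the classifier loss for a detector's objectness/classification loss yields evasion patches, and replacing the free pixel parameterization with a generative model's latent space is one way naturalistic patches are obtained.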

Papers