Generative Attack

Generative attacks use generative models, such as GANs and VAEs, to synthesize adversarial examples: inputs crafted to mislead machine learning models, particularly image classifiers and person re-identification systems. Unlike iterative per-image attacks, a trained generator can produce adversarial perturbations in a single forward pass, which makes these methods attractive for large-scale evaluation. Current research focuses on improving the transferability of these attacks across models and datasets, often employing meta-learning, contrastive learning, and prompt engineering to make the generated perturbations more robust and cheaper to produce. This area is crucial for evaluating the security and robustness of deep learning systems, with implications for applications including surveillance, autonomous driving, and privacy-preserving technologies.
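
The core recipe shared by many of these methods can be illustrated with a short sketch: a small generator network is trained to emit bounded perturbations that push a frozen surrogate classifier away from its own predictions, and the resulting perturbations are later applied to unseen victim models to test transferability. The generator architecture, epsilon bound, surrogate choice, and hyperparameters below are illustrative assumptions, not the method of any particular paper.

```python
# Minimal sketch of an untargeted generative adversarial attack in PyTorch.
# A small generator produces L-infinity-bounded perturbations that maximize
# the loss of a frozen surrogate classifier on its own predicted labels.
import torch
import torch.nn as nn
import torchvision.models as models

class PerturbationGenerator(nn.Module):
    """Maps an image to an additive perturbation bounded by eps (L-infinity)."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        delta = self.eps * self.net(x)            # scale to [-eps, eps]
        return torch.clamp(x + delta, 0.0, 1.0)   # keep valid pixel range

# Frozen surrogate classifier (any pretrained model could stand in here).
surrogate = models.resnet18(weights=None).eval()
for p in surrogate.parameters():
    p.requires_grad_(False)

generator = PerturbationGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images):
    """One generator update: maximize the surrogate's loss on its own labels."""
    with torch.no_grad():
        pseudo_labels = surrogate(images).argmax(dim=1)
    adv_images = generator(images)
    loss = -criterion(surrogate(adv_images), pseudo_labels)  # ascend the loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return adv_images, -loss.item()

# Example usage with random tensors standing in for a real image batch.
images = torch.rand(4, 3, 224, 224)
adv, surrogate_loss = train_step(images)
```

In a full pipeline the trained generator is then evaluated against held-out victim models; transferability-oriented work typically augments this basic objective with feature-space, contrastive, or meta-learned losses.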

Papers