Paper ID: 2302.05120

Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks

Piotr Gaiński, Klaudia Bałazy

We propose a novel gradient-based attack against transformer-based language models that searches for an adversarial example in a continuous space of token probabilities. Our algorithm narrows the gap between the adversarial loss computed on continuous text representations and that of their discrete counterparts by performing multi-step quantization in a quantization-compensation loop. Experiments show that our method significantly outperforms prior approaches on various natural language processing (NLP) tasks.
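The abstract describes the core idea only at a high level. Below is a minimal, self-contained sketch of one plausible reading of a quantization-compensation loop: continuous per-position token distributions are optimized by gradient ascent on the victim's loss, positions are quantized to one-hot tokens one at a time (the multi-step part), and the remaining continuous positions are re-optimized to compensate for each quantization. This is not the authors' implementation; the toy victim classifier, vocabulary size, step counts, and the confidence-based position-selection heuristic are all illustrative assumptions.

```python
# Hedged sketch of a quantization-compensation loop for a gradient-based
# text attack. All model and hyperparameter choices here are assumptions,
# not the method from the paper.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

VOCAB, DIM, SEQ, CLASSES = 50, 16, 8, 2

# Frozen toy victim: mean-pooled token embeddings -> linear classifier.
emb = torch.randn(VOCAB, DIM)
clf = torch.nn.Linear(DIM, CLASSES)
for p in clf.parameters():
    p.requires_grad_(False)

def adv_loss(probs, true_label):
    """Adversarial loss on a relaxed input: each row of `probs` is a
    distribution over the vocabulary; mixing it with the embedding table
    gives differentiable 'soft' token embeddings."""
    soft_emb = probs @ emb                      # (SEQ, DIM)
    logits = clf(soft_emb.mean(0, keepdim=True))
    # Negated so that minimizing it *maximizes* the victim's loss.
    return -F.cross_entropy(logits, true_label)

true_label = torch.tensor([0])
token_logits = torch.randn(SEQ, VOCAB, requires_grad=True)
opt = torch.optim.Adam([token_logits], lr=0.1)
frozen = torch.zeros(SEQ, dtype=torch.bool)     # positions already quantized
hard = torch.zeros(SEQ, VOCAB)                  # their one-hot rows

# Multi-step quantization: freeze one position per outer step, then let the
# remaining continuous positions compensate via further gradient updates.
for _ in range(SEQ):
    for _ in range(20):                         # compensation updates
        opt.zero_grad()
        soft = F.softmax(token_logits, dim=-1)
        probs = torch.where(frozen.unsqueeze(-1), hard, soft)
        adv_loss(probs, true_label).backward()
        opt.step()
    # Quantize the most confident still-continuous position to a one-hot
    # (a heuristic choice for this sketch).
    with torch.no_grad():
        soft = F.softmax(token_logits, dim=-1)
        conf = soft.max(-1).values.masked_fill(frozen, -1.0)
        pos = conf.argmax()
        hard[pos] = F.one_hot(soft[pos].argmax(), VOCAB).float()
        frozen[pos] = True

adv_tokens = hard.argmax(-1)                    # fully discrete adversarial ids
print(adv_tokens.tolist())
```

Quantizing one position at a time, rather than projecting the whole sequence to discrete tokens at once, lets the still-continuous positions absorb the loss change each quantization introduces; this is the intuition behind closing the continuous-discrete loss gap the abstract refers to.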

Submitted: Feb 10, 2023