Paper ID: 2210.05373 • Published Oct 11, 2022

Stable and Efficient Adversarial Training through Local Linearization

Zhuorong Li, Daiwei Yu
There has been a recent surge in single-step adversarial training, as it offers both robustness and efficiency. However, a phenomenon referred to as "catastrophic overfitting" has been observed, which is prevalent in single-step defenses and may frustrate attempts to use FGSM adversarial training. To address this issue, we propose a novel method, Stable and Efficient Adversarial Training (SEAT), which mitigates catastrophic overfitting by harnessing local properties that distinguish a robust model from a catastrophically overfitted one. The proposed SEAT has a strong theoretical justification: minimizing the SEAT loss can be shown to favour a smooth empirical risk, thereby leading to robustness. Experimental results demonstrate that the proposed method successfully mitigates catastrophic overfitting, yielding superior performance amongst efficient defenses. Our single-step method can reach 51% robust accuracy on CIFAR-10 under ℓ∞ perturbations of radius 8/255 against a strong PGD-50 attack, matching the performance of 10-step iterative adversarial training at merely 3% of the computational cost.
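To make the setting concrete, below is a minimal PyTorch sketch of the single-step (FGSM-style) adversarial training loop the abstract refers to, augmented with a generic local-linearity regularizer in the spirit of the paper's idea that a locally linear (smooth) loss surface distinguishes robust models from catastrophically overfitted ones. Everything here is an illustrative assumption: the function names, hyperparameters (`eps`, `alpha`, `lam`), and the `linearity_penalty` term are not the authors' exact SEAT loss, only a common way to penalize deviation from the first-order approximation of the loss.

```python
import torch
import torch.nn.functional as F

def linearity_penalty(model, x, y, delta):
    # Generic local-linearity measure (an assumption, not SEAT's exact loss):
    # how far the loss at the perturbed point x + delta deviates from its
    # first-order Taylor approximation around x. A small value means the loss
    # is locally linear, the property catastrophic overfitting destroys.
    x = x.clone().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x), y)
    # create_graph=True so the penalty itself stays differentiable
    grad = torch.autograd.grad(loss_clean, x, create_graph=True)[0]
    loss_adv = F.cross_entropy(model(x + delta), y)
    linear_approx = loss_clean + (grad * delta).sum()
    return (loss_adv - linear_approx).abs()

def single_step_at_step(model, optimizer, x, y,
                        eps=8 / 255, alpha=10 / 255, lam=1.0):
    # One update of single-step adversarial training with a linearity term.
    # Random start inside the eps-ball, as in common fast-AT recipes.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)

    # Single FGSM step: one gradient ascent step on the loss w.r.t. the input.
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    x_adv = (x + delta).clamp(0.0, 1.0)  # assumes inputs normalized to [0, 1]

    # Train on the adversarial example plus the local-linearity penalty.
    optimizer.zero_grad()
    total = F.cross_entropy(model(x_adv), y) \
        + lam * linearity_penalty(model, x, y, delta)
    total.backward()
    optimizer.step()
    return total.item()
```

The `eps=8/255` here matches the ℓ∞ radius used in the abstract's CIFAR-10 evaluation; the single backward pass per batch (versus ten for PGD-10 training) is what drives the roughly order-of-magnitude cost reduction the abstract reports.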