Paper ID: 2306.01809
Adversarial Attack Based on Prediction-Correction
Chen Wan, Fangjun Huang
Deep neural networks (DNNs) are vulnerable to adversarial examples obtained by adding small perturbations to original examples. The added perturbations in existing attacks are mainly determined by the gradient of the loss function with respect to the inputs. In this paper, the close relationship between gradient-based attacks and numerical methods for solving ordinary differential equations (ODEs) is studied for the first time. Inspired by the numerical solution of ODEs, a new prediction-correction (PC) based adversarial attack is proposed. In our proposed PC-based attack, an existing attack is first selected to produce a predicted example, and then the predicted example and the current example are combined to determine the added perturbation. The proposed method possesses good extensibility and can be easily applied to all available gradient-based attacks. Extensive experiments demonstrate that, compared with state-of-the-art gradient-based adversarial attacks, our proposed PC-based attacks achieve higher attack success rates and exhibit better transferability.
Submitted: Jun 2, 2023
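
The abstract's predictor-corrector idea can be illustrated with a minimal PyTorch sketch. This is not the paper's exact algorithm: the predictor here is a plain I-FGSM step, the corrector simply mixes the gradients at the current and predicted examples with a hypothetical weight `c` (analogous to a Heun-style ODE corrector), and all function and parameter names are illustrative assumptions.

```python
import torch

def pc_ifgsm(model, x, y, eps=8/255, alpha=2/255, steps=10, c=0.5):
    """Sketch of a prediction-correction variant of I-FGSM.

    Predictor: a tentative FGSM step from the current iterate.
    Corrector: combine the gradients at the current and predicted
    examples (mixing weight `c` is a hypothetical hyperparameter;
    the paper's actual combination rule may differ).
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()

    def grad_at(z):
        # Gradient of the loss w.r.t. the input, as in gradient-based attacks.
        z = z.clone().detach().requires_grad_(True)
        loss = loss_fn(model(z), y)
        return torch.autograd.grad(loss, z)[0]

    for _ in range(steps):
        g_cur = grad_at(x_adv)
        # Predictor: produce a predicted example with one FGSM step,
        # projected back into the eps-ball and valid pixel range.
        x_pred = x_adv + alpha * g_cur.sign()
        x_pred = (x + (x_pred - x).clamp(-eps, eps)).clamp(0, 1)
        # Corrector: determine the perturbation from both the current
        # and the predicted example.
        g = (1 - c) * g_cur + c * grad_at(x_pred)
        x_adv = x_adv + alpha * g.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()
```

As in the paper's framing, any gradient-based attack could in principle replace the FGSM predictor step, since the corrector only needs gradients evaluated at the current and predicted points.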