Paper ID: 2202.02503
Adversarial Detector with Robust Classifier
Takayuki Osakabe, Maungmaung Aprilpyone, Sayaka Shiota, Hitoshi Kiya
Deep neural network (DNN) models are well known to be easily fooled into misclassification by input images with small perturbations, called adversarial examples. In this paper, we propose a novel adversarial detector, consisting of a robust classifier and a plain one, to detect adversarial examples with high accuracy. The proposed detector makes its decision on the basis of the logits output by the plain and robust classifiers. In an experiment, the proposed detector is demonstrated to outperform a state-of-the-art detector that does not use a robust classifier.
Submitted: Feb 5, 2022
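The abstract does not spell out the exact decision rule derived from the two sets of logits, but a minimal sketch of a logit-comparison detector might look like the following. It assumes the detector flags an input when the plain and robust classifiers disagree on it, either by top-1 label or by logit similarity; the model handles (`plain_model`, `robust_model`) and the threshold are hypothetical placeholders, not the authors' exact method.

```python
# Hypothetical sketch of a plain/robust logit-comparison detector.
# The actual rule in the paper may differ; this only illustrates the idea
# of detecting adversarial examples from disagreement between the two models.
import torch
import torch.nn as nn
import torch.nn.functional as F


def detect_adversarial(
    plain_model: nn.Module,      # plain (standard) classifier -- assumed name
    robust_model: nn.Module,     # adversarially robust classifier -- assumed name
    x: torch.Tensor,             # batch of input images, shape (N, C, H, W)
    sim_threshold: float = 0.5,  # hypothetical cosine-similarity threshold
) -> torch.Tensor:
    """Return a boolean tensor of shape (N,); True means 'flagged as adversarial'."""
    plain_model.eval()
    robust_model.eval()
    with torch.no_grad():
        logits_plain = plain_model(x)    # (N, num_classes)
        logits_robust = robust_model(x)  # (N, num_classes)

    # Rule 1: the two classifiers predict different labels.
    label_mismatch = logits_plain.argmax(dim=1) != logits_robust.argmax(dim=1)

    # Rule 2: the logit vectors are dissimilar (low cosine similarity).
    cos_sim = F.cosine_similarity(logits_plain, logits_robust, dim=1)
    low_similarity = cos_sim < sim_threshold

    # Flag the input if either disagreement signal fires.
    return label_mismatch | low_similarity
```

The intuition behind such a rule is that a clean image tends to be classified consistently by both models, whereas an adversarial perturbation crafted against the plain classifier typically leaves the robust classifier's output largely unchanged, so the two logit vectors diverge.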