Paper ID: 2501.12516 • Published Jan 21, 2025
Robustness of Selected Learning Models under Label-Flipping Attack
Sarvagya Bhargava, Mark Stamp
In this paper we compare traditional machine learning and deep learning
models trained on a malware dataset when subjected to an adversarial attack
based on label-flipping. Specifically, we investigate the robustness of Support
Vector Machines (SVM), Random Forest, Gaussian Naive Bayes (GNB), Gradient
Boosting Machine (GBM), LightGBM, XGBoost, Multilayer Perceptron (MLP),
Convolutional Neural Network (CNN), MobileNet, and DenseNet models when facing
varying percentages of misleading labels. We empirically assess the
accuracy of each of these models under such an adversarial attack on the
training data. This research aims to provide insights into which models are
inherently more robust, in the sense of being better able to resist intentional
disruptions to the training data. We find wide variation in robustness to
adversarial attack across the models tested, with our MLP model achieving the best
combination of initial accuracy and robustness.
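The experimental setup the abstract describes can be illustrated with a minimal sketch in Python. This is not the authors' code: the flip_labels helper is hypothetical, scikit-learn's make_classification stands in for the malware dataset, and RandomForestClassifier is just one of the models listed; the sketch measures clean test accuracy as the fraction of flipped training labels grows.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Synthetic binary dataset as a stand-in for the malware dataset used in the paper.
X, y = make_classification(n_samples=5000, n_features=30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

def flip_labels(y, fraction, rng):
    # Flip a given fraction of binary training labels, chosen uniformly at random.
    y_flipped = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_flipped[idx] = 1 - y_flipped[idx]
    return y_flipped

# Train under progressively higher label-flipping rates; evaluate on clean test data.
for fraction in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]:
    y_poisoned = flip_labels(y_train, fraction, rng)
    model = RandomForestClassifier(random_state=42).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"flip rate {fraction:.0%}: test accuracy {acc:.3f}")

Repeating the loop for each classifier in the study and comparing the resulting accuracy curves gives the kind of robustness comparison the paper reports.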