Paper ID: 2206.06256
On the impact of dataset size and class imbalance in evaluating machine-learning-based Windows malware detection techniques
David Illes
The purpose of this project was to collect and analyse data about the comparability and real-world applicability of published results on Microsoft Windows malware detection, specifically the impact of dataset size and testing-dataset imbalance on measured detector performance. Some researchers use smaller datasets, and if dataset size has a significant impact on measured performance, comparing published results becomes difficult. Researchers also tend to use balanced datasets and accuracy as the testing metric. The former is not a true representation of reality, where benign samples significantly outnumber malware, and the latter approach is known to be problematic for imbalanced problems. The project identified two key objectives: to understand whether dataset size correlates with measured detector performance to an extent that prevents meaningful comparison of published results, and to understand whether detectors reported to perform well in published research can be expected to perform well in a real-world deployment scenario. The results suggested that dataset size does correlate with measured detector performance to an extent that prevents meaningful comparison of published results; without understanding the nature of the training-set-size/accuracy curve behind a published result, conclusions about which approach is "better" should not be drawn solely from accuracy scores. The results also suggested that high accuracy scores do not necessarily translate to high real-world performance.
Submitted: Jun 13, 2022
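
The abstract's claim that accuracy is problematic for imbalanced problems can be made concrete with a small worked example. The sketch below is illustrative only and is not code from the paper: it assumes a hypothetical detector with a 95% true-positive rate and a 95% true-negative rate, and an assumed 1:100 malware-to-benign ratio at deployment. Accuracy stays near 0.95 in both settings, while precision collapses once benign samples dominate.

# Illustrative sketch (not from the paper): why accuracy measured on a
# balanced test set can overstate real-world detector performance.
# The 95% TPR/TNR detector and the 1:100 deployment ratio are assumptions.

def detector_stats(n_malware, n_benign, tpr=0.95, tnr=0.95):
    tp = tpr * n_malware          # malware correctly flagged
    fn = n_malware - tp           # malware missed
    tn = tnr * n_benign           # benign correctly passed
    fp = n_benign - tn            # benign falsely flagged
    accuracy = (tp + tn) / (n_malware + n_benign)
    precision = tp / (tp + fp)    # fraction of alerts that are real malware
    return accuracy, precision

# Balanced test set, as commonly used in published evaluations:
acc, prec = detector_stats(n_malware=10_000, n_benign=10_000)
print(f"balanced   : accuracy={acc:.3f}, precision={prec:.3f}")
# -> accuracy=0.950, precision=0.950

# Imbalanced test set closer to deployment, where benign files dominate:
acc, prec = detector_stats(n_malware=1_000, n_benign=100_000)
print(f"imbalanced : accuracy={acc:.3f}, precision={prec:.3f}")
# -> accuracy is still ~0.950, but precision drops to ~0.160:
#    most alerts are now false positives.

The same detector thus reports an identical accuracy score in both settings, which is consistent with the paper's conclusion that high accuracy on a balanced test set does not necessarily translate to high real-world performance.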