Paper ID: 2409.01813
Reassessing Noise Augmentation Methods in the Context of Adversarial Speech
Karla Pizzi, Matías P. Pizarro B, Asja Fischer
In this study, we investigate whether noise-augmented training can concurrently improve adversarial robustness in automatic speech recognition (ASR) systems. We conduct a comparative analysis of the adversarial robustness of four different state-of-the-art ASR architectures, each trained under three different augmentation conditions: one subject to background noise, speed variations, and reverberations; another subject to speed variations only; and a third without any form of data augmentation. The results demonstrate that noise augmentation not only improves model performance on noisy speech but also improves the model's robustness to adversarial attacks.
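Two of the augmentation conditions mentioned above, additive background noise and speed perturbation, can be sketched in a few lines. This is a generic illustration, not the paper's actual pipeline; the function names, the SNR parameterization, and the use of plain linear-interpolation resampling are assumptions for the sake of the example.

```python
import numpy as np

def add_noise_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix `noise` into `clean` so the mixture has the requested signal-to-noise ratio (dB)."""
    noise = np.resize(noise, clean.shape)              # tile/trim noise to match the utterance
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12              # guard against silent noise
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

def speed_perturb(wave: np.ndarray, factor: float) -> np.ndarray:
    """Change playback speed by `factor` via linear-interpolation resampling.

    factor > 1 shortens the signal (faster speech), factor < 1 lengthens it.
    """
    n_out = int(round(len(wave) / factor))
    old_idx = np.linspace(0, len(wave) - 1, num=n_out)
    return np.interp(old_idx, np.arange(len(wave)), wave)
```

In a training loop, such transforms would typically be applied on the fly with randomly sampled SNRs and speed factors, so each epoch sees a different corrupted version of each utterance.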
Submitted: Sep 3, 2024