Audio Adversarial Example

Audio adversarial examples are subtly manipulated audio files designed to fool automatic speech recognition (ASR) systems while sounding, to human listeners, indistinguishable from the original audio. Current research focuses on making the perturbations more robust and less perceptible, and on building effective defenses against these attacks, using techniques such as psychoacoustic modeling, diffusion-based purification, and analysis of the query patterns an attacker produces while generating adversarial examples. This work matters for the security and reliability of ASR systems across applications ranging from voice assistants to security systems, because it addresses their vulnerability to malicious audio manipulation.
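As a rough illustration of the attack side, the sketch below shows how a targeted audio adversarial example might be optimized with projected gradient descent, assuming a differentiable, CTC-based ASR model. The model, the `asr_loss` helper, and all parameter values (`eps`, `steps`, `lr`) are hypothetical placeholders, not any specific system's API; the L-infinity bound only approximates imperceptibility, which psychoacoustic masking would refine further.

```python
import torch

def asr_loss(model, audio, target_ids):
    """Scalar loss that is low when the model transcribes audio as the target text.

    Assumes a hypothetical model returning (time, batch, vocab) logits and
    target_ids of shape (batch, target_len); both are illustrative stand-ins.
    """
    logits = model(audio)
    log_probs = torch.log_softmax(logits, dim=-1)
    input_lens = torch.full((audio.shape[0],), logits.shape[0], dtype=torch.long)
    target_lens = torch.full((audio.shape[0],), target_ids.shape[1], dtype=torch.long)
    return torch.nn.functional.ctc_loss(log_probs, target_ids, input_lens, target_lens)

def make_adversarial(model, audio, target_ids, eps=0.002, steps=500, lr=1e-4):
    """Optimize a small additive perturbation delta so the model hears the target phrase."""
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = asr_loss(model, audio + delta, target_ids)   # push output toward the target
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                                        # stay in the eps-ball
            delta.copy_(torch.clamp(audio + delta, -1.0, 1.0) - audio)     # keep waveform valid
    return (audio + delta).detach()
```

Defenses reverse this perspective: for example, diffusion-based purification tries to remove such small perturbations before recognition, and query-pattern analysis flags the repeated, slightly varied requests that iterative optimization like the loop above tends to produce.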

Papers