Adversarial Audio
Adversarial audio research focuses on creating subtly altered audio files that are imperceptible to humans but designed to fool automatic speech recognition (ASR) and speaker verification (SV) systems. Current work explores a range of attack methods, including perturbation generators built on nonlinear system models (e.g., Hammerstein models), linguistic-feature-based approaches that manipulate transcriptions efficiently, and techniques for crafting universal adversarial segments that affect multiple models at once. This field is crucial for understanding and mitigating security vulnerabilities in voice-controlled devices and biometric systems, and it directly informs the development of robust and trustworthy AI applications.
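To make the core idea concrete, the sketch below shows a simple gradient-based (FGSM-style) targeted perturbation against a toy ASR model. The model architecture, vocabulary size, and epsilon budget are illustrative assumptions rather than the method of any particular paper; real attacks typically use stronger iterative optimization and psychoacoustic constraints.

```python
# Minimal sketch of a one-step, L-infinity-bounded adversarial perturbation
# against a toy CTC-based ASR model. Everything here (model, vocab, epsilon)
# is an illustrative assumption, not a reproduction of a specific attack.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyASR(nn.Module):
    """Stand-in acoustic model: raw waveform -> per-frame character logits."""
    def __init__(self, vocab_size: int = 29, hop: int = 160):
        super().__init__()
        self.conv = nn.Conv1d(1, 64, kernel_size=400, stride=hop, padding=200)
        self.proj = nn.Linear(64, vocab_size)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples) -> logits: (frames, batch, vocab), as CTC expects
        feats = F.relu(self.conv(wav.unsqueeze(1)))   # (B, 64, T)
        logits = self.proj(feats.transpose(1, 2))     # (B, T, V)
        return logits.permute(1, 0, 2)

def fgsm_audio_attack(model, wav, target_ids, epsilon=0.002):
    """One-step perturbation pushing the ASR output toward a chosen
    target transcription (a targeted attack), bounded by epsilon."""
    wav = wav.clone().requires_grad_(True)
    log_probs = model(wav).log_softmax(dim=-1)
    input_lens = torch.full((wav.size(0),), log_probs.size(0), dtype=torch.long)
    target_lens = torch.tensor([target_ids.size(1)] * wav.size(0))
    loss = F.ctc_loss(log_probs, target_ids, input_lens, target_lens, blank=0)
    loss.backward()
    # Step *against* the gradient to lower the loss on the target transcript,
    # then clip so the perturbation stays under the (inaudibility) budget.
    delta = (-epsilon * wav.grad.sign()).clamp(-epsilon, epsilon)
    return (wav.detach() + delta).clamp(-1.0, 1.0)

if __name__ == "__main__":
    model = ToyASR()
    clean = torch.randn(1, 16000) * 0.1      # one second of placeholder audio at 16 kHz
    target = torch.randint(1, 29, (1, 10))   # arbitrary 10-token target transcription
    adversarial = fgsm_audio_attack(model, clean, target)
    print("max |delta|:", (adversarial - clean).abs().max().item())
```

Iterating this step with a small learning rate and re-projecting onto the epsilon ball (a PGD-style loop) is the usual next refinement; universal-segment attacks instead optimize a single perturbation over many utterances and models.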