Audio Attack
Audio attacks exploit vulnerabilities in audio processing systems, manipulating or deceiving them through malicious modifications of the audio signal. Current research focuses on two fronts: developing robust detection methods, often employing contrastive learning and specialized architectures such as RawNet, and analyzing the effectiveness of attack strategies, including noise injection and spectrum manipulation, against commercial voice control systems and automatic speech recognition. This work is crucial for securing voice-controlled devices and AI systems that rely on audio input, with implications for cybersecurity, digital forensics, and the trustworthiness of AI-driven applications.
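As a concrete illustration of the noise-injection strategy mentioned above, the sketch below adds Gaussian noise to a waveform at a chosen signal-to-noise ratio. This is a deliberately minimal toy, not any specific attack from the papers listed here: real adversarial audio attacks optimize the perturbation against a target model, whereas this only controls the perturbation's energy. The function name `inject_noise` and the SNR-based scaling are illustrative assumptions.

```python
import numpy as np

def inject_noise(audio: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add Gaussian noise to `audio` at a target SNR in decibels.

    Toy stand-in for a noise-injection attack: the noise power is
    scaled relative to the signal power so the perturbation can be
    kept small (high SNR) while still shifting the features a
    recognizer extracts.
    """
    rng = np.random.default_rng(rng)
    signal_power = np.mean(audio ** 2)
    # Target noise power from SNR(dB) = 10 * log10(P_signal / P_noise).
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise

# Example: perturb a 1-second, 16 kHz sine tone at 30 dB SNR.
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
clean = np.sin(2.0 * np.pi * 440.0 * t)
adv = inject_noise(clean, snr_db=30.0, rng=0)
```

At 30 dB SNR the perturbation is barely audible to a listener, which is exactly the regime such attacks target; a targeted attack would replace the random draw with gradient-based optimization against the victim model.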
Papers
November 22, 2024
September 3, 2024
April 24, 2024
December 10, 2023
August 18, 2023
May 23, 2023