Speaker Verification
Speaker verification (SV) aims to automatically authenticate a person's identity from their voice, with a focus on building systems that are both robust and accurate. Current research emphasizes improving the discriminative power of speaker embeddings through contrastive learning, disentangling confounding factors such as age and channel variation, and leveraging powerful pre-trained models such as WavLM and Whisper. These advances underpin security-critical applications ranging from access control to forensic investigation, and they drive ongoing efforts to improve robustness against spoofing attacks and noisy conditions.
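To make the contrastive-learning idea mentioned above concrete, below is a minimal sketch of a supervised contrastive loss over speaker embeddings: utterances from the same speaker are pulled together and those from different speakers pushed apart. It is not taken from any of the listed papers; the function name, the toy batch, and the 192-dimensional embeddings are illustrative assumptions, standing in for the output of a real speaker encoder (e.g. a WavLM- or Whisper-based front end).

```python
import torch
import torch.nn.functional as F


def speaker_contrastive_loss(embeddings: torch.Tensor,
                             speaker_ids: torch.Tensor,
                             temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss over a batch of utterance embeddings.

    embeddings:  (N, D) utterance-level speaker embeddings
    speaker_ids: (N,)   integer speaker labels
    """
    z = F.normalize(embeddings, dim=1)                 # unit-length embeddings
    sim = z @ z.t() / temperature                      # scaled cosine similarities
    n = z.size(0)

    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))    # exclude self-similarity

    # Positive pairs: same speaker, excluding the anchor itself.
    pos_mask = (speaker_ids.unsqueeze(0) == speaker_ids.unsqueeze(1)) & ~self_mask

    # Log-probability of each candidate given the anchor (softmax over the batch).
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability of the positives per anchor; anchors without any
    # in-batch positive are skipped.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(pos_log_prob[valid] / pos_counts[valid]).mean()


if __name__ == "__main__":
    # Toy batch: 8 utterances from 4 speakers (hypothetical data).
    emb = torch.randn(8, 192, requires_grad=True)
    spk = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
    loss = speaker_contrastive_loss(emb, spk)
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```

In practice the batch would be sampled so that each speaker contributes several utterances, and the embeddings would come from the encoder being trained rather than from random tensors.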
Papers
Speaker-IPL: Unsupervised Learning of Speaker Characteristics with i-Vector based Pseudo-Labels
Zakaria Aldeneh, Takuya Higuchi, Jee-weon Jung, Li-Wei Chen, Stephen Shum, Ahmed Hussen Abdelaziz, Shinji Watanabe, Tatiana Likhomanenko, Barry-John Theobald
Speaker Contrastive Learning for Source Speaker Tracing
Qing Wang, Hongmei Guo, Jian Kang, Mengjie Du, Jie Li, Xiao-Lei Zhang, Lei Xie
Whisper-PMFA: Partial Multi-Scale Feature Aggregation for Speaker Verification using Whisper Models
Yiyang Zhao, Shuai Wang, Guangzhi Sun, Zehua Chen, Chao Zhang, Mingxing Xu, Thomas Fang Zheng
EmoAttack: Utilizing Emotional Voice Conversion for Speech Backdoor Attacks on Deep Speech Classification Models
Wenhan Yao, Zedong Xing, Xiarun Chen, Jia Liu, Yongqiang He, Weiping Wen