Speaker Verification
Speaker verification (SV) aims to automatically authenticate a person's identity from their voice. Current research emphasizes improving the discriminative power of speaker embeddings through techniques such as contrastive learning, disentangling confounding factors such as age and channel variation, and leveraging powerful pre-trained models such as WavLM and Whisper. These advances are crucial for security-sensitive applications, from access control to forensic investigation, and drive ongoing efforts to improve robustness against spoofing attacks and noisy conditions.
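At its core, a verification trial compares an enrollment utterance with a test utterance in an embedding space and accepts or rejects the identity claim based on a scored similarity. The sketch below illustrates that standard decision rule under the assumption of a generic embedding extractor; `extract_embedding`, `cosine_score`, `verify`, and the example threshold are illustrative placeholders, not the method of any paper listed here.

```python
import numpy as np

# Minimal sketch of the common SV decision rule: score a trial by the cosine
# similarity between two fixed-dimensional speaker embeddings and compare it
# against a threshold. `extract_embedding` is a hypothetical stand-in for any
# trained embedding model (e.g. an x-vector or WavLM-based extractor).

def extract_embedding(waveform: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: map a 16 kHz waveform to a speaker embedding."""
    raise NotImplementedError("Plug in a trained speaker-embedding model here.")

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity in [-1, 1]; higher means more likely the same speaker."""
    return float(np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

def verify(enroll_wav: np.ndarray, test_wav: np.ndarray, threshold: float = 0.5) -> bool:
    """Accept the identity claim if the trial score exceeds the threshold.

    The threshold is application-dependent and is normally tuned on a
    development set to balance false accepts and false rejects.
    """
    score = cosine_score(extract_embedding(enroll_wav), extract_embedding(test_wav))
    return score >= threshold
```

In practice the raw similarity score is usually calibrated before thresholding so that operating points transfer across conditions, which is the focus of the calibration work listed below.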
Papers
VoiceMe: Personalized voice generation in TTS
Pol van Rijn, Silvan Mertes, Dominik Schiller, Piotr Dura, Hubert Siuzdak, Peter M. C. Harrison, Elisabeth André, Nori Jacoby
Decomposed Temporal Dynamic CNN: Efficient Time-Adaptive Network for Text-Independent Speaker Verification Explained with Speaker Activation Map
Seong-Hu Kim, Hyeonuk Nam, Yong-Hwa Park
NeuraGen-A Low-Resource Neural Network based approach for Gender Classification
Shankhanil Ghosh, Chhanda Saha, Naagamani Molakathaala
MFA-Conformer: Multi-scale Feature Aggregation Conformer for Automatic Speaker Verification
Yang Zhang, Zhiqiang Lv, Haibin Wu, Shanshan Zhang, Pengfei Hu, Zhiyong Wu, Hung-yi Lee, Helen Meng
Investigation of Different Calibration Methods for Deep Speaker Embedding based Verification Systems
Galina Lavrentyeva, Sergey Novoselov, Andrey Shulipa, Marina Volkova, Aleksandr Kozlov
Robust Speaker Recognition with Transformers Using wav2vec 2.0
Sergey Novoselov, Galina Lavrentyeva, Anastasia Avdeeva, Vladimir Volokhov, Aleksei Gusev