Speech Model
Speech models represent and process spoken language computationally, enabling applications such as automatic speech recognition (ASR) and text-to-speech (TTS) synthesis. Current research emphasizes robustness (e.g., to noise and accents), fairness (mitigating biases against marginalized language varieties), and efficiency (through techniques such as knowledge distillation and low-rank adaptation), typically building on transformer architectures and self-supervised learning. These advances have significant implications for healthcare (e.g., voice disorder detection, mental health assessment), language preservation, and human-computer interaction.
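As one concrete illustration of the efficiency techniques named above, the following is a minimal sketch of low-rank adaptation (LoRA) applied to a single linear projection, such as one inside a transformer speech encoder. Everything here is illustrative: the LoRALinear class, the rank r, and the scaling factor alpha are assumptions chosen for demonstration and are not drawn from any of the papers listed below.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update.

    The adapted forward pass computes W x + (alpha / r) * B A x, where the
    pretrained weight W is frozen and only A (r x in_features) and
    B (out_features x r) are trained. Names and defaults are illustrative.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A gets a small random init; B starts at zero so the initial
        # update is zero and the wrapped layer behaves like the original.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank residual update.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

# Usage: adapt one projection of a hypothetical pretrained speech encoder.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 100, 768))  # (batch, frames, features)
print(out.shape)  # torch.Size([4, 100, 768])

Because only the two small matrices A and B receive gradients, the trainable parameter count drops from out_features x in_features to r x (in_features + out_features), which is the source of LoRA's efficiency when fine-tuning large pretrained speech models.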
Papers
Self-Supervised Speech Representations are More Phonetic than Semantic
Kwanghee Choi, Ankita Pasad, Tomohiko Nakamura, Satoru Fukayama, Karen Livescu, Shinji Watanabe
Exploring Self-Supervised Multi-view Contrastive Learning for Speech Emotion Recognition with Limited Annotations
Bulat Khaertdinov, Pedro Jeuris, Annanda Sousa, Enrique Hortal
Integrating Self-supervised Speech Model with Pseudo Word-level Targets from Visually-grounded Speech Model
Hung-Chieh Fang, Nai-Xuan Ye, Yi-Jen Shih, Puyuan Peng, Hsuan-Fu Wang, Layne Berry, Hung-yi Lee, David Harwath
Spirit LM: Interleaved Spoken and Written Language Model
Tu Anh Nguyen, Benjamin Muller, Bokai Yu, Marta R. Costa-jussa, Maha Elbayad, Sravya Popuri, Christophe Ropers, Paul-Ambroise Duquenne, Robin Algayres, Ruslan Mavlyutov, Itai Gat, Mary Williamson, Gabriel Synnaeve, Juan Pino, Benoit Sagot, Emmanuel Dupoux