Speech Analysis
Speech analysis is a rapidly evolving field that uses computational methods to understand and manipulate spoken language, with the aim of improving human-computer interaction and addressing challenges in healthcare and other domains. Current research emphasizes robust models, often based on transformer networks and neural codecs, for tasks such as speech recognition, emotion detection, and speech generation, including multi-speaker scenarios and low-resource languages. These advances have significant implications for applications ranging from improved accessibility for people with speech impairments, to more natural and intuitive interfaces for a variety of technologies, to new diagnostic tools in healthcare.
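Before neural models like those in the papers below are applied, speech is typically analyzed in short overlapping frames. As a minimal, self-contained illustration (not taken from any listed paper), the sketch below computes two classic short-time features, energy and zero-crossing rate, on a synthetic waveform; the frame and hop sizes correspond to the common 25 ms / 10 ms windows at 16 kHz:

```python
import math

def frame_signal(signal, frame_len=400, hop=160):
    """Split a waveform into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def short_time_energy(frame):
    """Mean squared amplitude of one frame."""
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

# Synthetic 16 kHz signal: 0.1 s of silence followed by 0.1 s of a 200 Hz tone.
sr = 16000
silence = [0.0] * (sr // 10)
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(sr // 10)]
signal = silence + tone

frames = frame_signal(signal)
energies = [short_time_energy(f) for f in frames]
zcrs = [zero_crossing_rate(f) for f in frames]
# Energy is zero in the silent region and rises once the tone begins.
```

Real systems compute richer front-ends (log-mel spectrograms, learned codec tokens), but the framing step shown here is the same.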
Papers
Exploring the encoding of linguistic representations in the Fully-Connected Layer of generative CNNs for Speech
Bruno Ferenc Šegedin, Gašper Beguš
Microphone Array Signal Processing and Deep Learning for Speech Enhancement
Reinhold Haeb-Umbach, Tomohiro Nakatani, Marc Delcroix, Christoph Boeddeker, Tsubasa Ochiai
Unsupervised Speech Segmentation: A General Approach Using Speech Language Models
Avishai Elmakies, Omri Abend, Yossi Adi
Effective and Efficient Mixed Precision Quantization of Speech Foundation Models
Haoning Xu, Zhaoqing Li, Zengrui Jin, Huimeng Wang, Youjun Chen, Guinan Li, Mengzhe Geng, Shujie Hu, Jiajun Deng, Xunying Liu
Tackling Cognitive Impairment Detection from Speech: A submission to the PROCESS Challenge
Catarina Botelho, David Gimeno-Gómez, Francisco Teixeira, John Mendonça, Patrícia Pereira, Diogo A.P. Nunes, Thomas Rolland, Anna Pompili, Rubén Solera-Ureña, Maria Ponte, David Martins de Matos, Carlos-D. Martínez-Hinarejos, Isabel Trancoso, Alberto Abad
Two-component spatiotemporal template for activation-inhibition of speech in ECoG
Eric Easthope
Depression and Anxiety Prediction Using Deep Language Models and Transfer Learning
Tomasz Rutowski, Elizabeth Shriberg, Amir Harati, Yang Lu, Piotr Chlebek, Ricardo Oliveira
A Multimodal Emotion Recognition System: Integrating Facial Expressions, Body Movement, Speech, and Spoken Language
Kris Kraack
VERSA: A Versatile Evaluation Toolkit for Speech, Audio, and Music
Jiatong Shi, Hye-jin Shim, Jinchuan Tian, Siddhant Arora, Haibin Wu, Darius Petermann, Jia Qi Yip, You Zhang, Yuxun Tang, Wangyou Zhang, Dareen Safar Alharthi, Yichen Huang, Koichi Saito, Jionghao Han, Yiwen Zhao, Chris Donahue, Shinji Watanabe
Temporal-Frequency State Space Duality: An Efficient Paradigm for Speech Emotion Recognition
Jiaqi Zhao, Fei Wang, Kun Li, Yanyan Wei, Shengeng Tang, Shu Zhao, Xiao Sun
A Multi-modal Approach to Dysarthria Detection and Severity Assessment Using Speech and Text Information
Anuprabha M, Krishna Gurugubelli, Kesavaraj V, Anil Kumar Vuppala