Speech Enhancement
Speech enhancement aims to improve the clarity and intelligibility of speech signals degraded by noise and reverberation, a capability crucial for applications such as hearing aids and voice assistants. Current research focuses on computationally efficient models, including lightweight convolutional neural networks, recurrent neural networks (such as LSTMs), and diffusion models, often combined with multi-channel processing, attention mechanisms, and self-supervised learning to achieve high performance at low latency. These advances are driving progress toward more robust and resource-efficient speech enhancement systems for real-world deployment, particularly on low-power devices and in challenging acoustic environments. The field also explores integrating visual information and advanced signal processing techniques to further improve performance.
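To make the recurrent, mask-based approach mentioned above concrete, here is a minimal sketch of a single-channel LSTM enhancer in PyTorch: it estimates a time-frequency magnitude mask from the noisy spectrogram and reuses the noisy phase. The class name, layer sizes, and FFT settings are illustrative assumptions, not drawn from any of the papers listed below.

```python
# Minimal sketch of mask-based speech enhancement with an LSTM (PyTorch).
# All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMMaskEnhancer(nn.Module):
    """Estimates a sigmoid magnitude mask over STFT bins, frame by frame."""
    def __init__(self, n_fft: int = 512, hidden: int = 256):
        super().__init__()
        self.n_fft = n_fft
        self.hop = n_fft // 4
        n_bins = n_fft // 2 + 1
        self.lstm = nn.LSTM(n_bins, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Sequential(nn.Linear(hidden, n_bins), nn.Sigmoid())
        self.register_buffer("window", torch.hann_window(n_fft))

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        # noisy: (batch, samples) waveform
        spec = torch.stft(noisy, self.n_fft, self.hop,
                          window=self.window, return_complex=True)
        mag = spec.abs().transpose(1, 2)           # (batch, frames, bins)
        h, _ = self.lstm(mag)
        mask = self.proj(h).transpose(1, 2)        # (batch, bins, frames)
        enhanced = spec * mask                     # scale magnitude, keep noisy phase
        return torch.istft(enhanced, self.n_fft, self.hop,
                           window=self.window, length=noisy.shape[-1])

model = LSTMMaskEnhancer()
noisy = torch.randn(1, 16000)   # stand-in for 1 s of noisy speech at 16 kHz
clean_estimate = model(noisy)   # same length as the input waveform
```

The sigmoid bounds the mask in [0, 1], so the model can only attenuate time-frequency bins, which keeps training stable; multi-channel, attention-based, and diffusion approaches such as those below replace or extend this basic single-channel recipe.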
Papers
Hierarchical Modeling of Spatial Cues via Spherical Harmonics for Multi-Channel Speech Enhancement
Jiahui Pan, Shulin He, Hui Zhang, Xueliang Zhang
PDPCRN: Parallel Dual-Path CRN with Bi-directional Inter-Branch Interactions for Multi-Channel Speech Enhancement
Jiahui Pan, Shulin He, Tianci Wu, Hui Zhang, Xueliang Zhang
AV2Wav: Diffusion-Based Re-synthesis from Continuous Self-supervised Features for Audio-Visual Speech Enhancement
Ju-Chieh Chou, Chung-Ming Chien, Karen Livescu
Diff-SV: A Unified Hierarchical Framework for Noise-Robust Speaker Verification Using Score-Based Diffusion Probabilistic Models
Ju-ho Kim, Jungwoo Heo, Hyun-seo Shin, Chan-yeong Lim, Ha-Jin Yu