Speech Enhancement
Speech enhancement aims to improve the clarity and intelligibility of speech signals degraded by noise and reverberation, which is crucial for applications like hearing aids and voice assistants. Current research focuses on developing computationally efficient models, including lightweight convolutional neural networks, recurrent neural networks (such as LSTMs), and diffusion models, often incorporating techniques like multi-channel processing, attention mechanisms, and self-supervised learning to achieve high performance with minimal latency. These advances are driving progress toward more robust and resource-efficient speech enhancement systems for a wide range of real-world applications, particularly on low-power devices and in challenging acoustic environments. The field also explores the integration of visual information and advanced signal processing techniques to further enhance performance.
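One classical formulation that several of the papers below build on (e.g. IRM-based enhancement) is time-frequency masking: estimate a per-bin gain between 0 and 1 and apply it to the noisy magnitude spectrogram. The sketch below computes the ideal ratio mask, IRM = sqrt(S^2 / (S^2 + N^2)), from oracle speech and noise magnitudes; it is a minimal illustration, and the function names, toy spectrogram values, and the uncorrelated-sources assumption are my own, not taken from any specific paper listed here.

```python
import math

def ideal_ratio_mask(speech_mag, noise_mag):
    """Per time-frequency bin: IRM = sqrt(S^2 / (S^2 + N^2)).

    Assumes speech and noise are uncorrelated, so the mixture power
    is approximately the sum of speech and noise powers.
    """
    eps = 1e-12  # avoid division by zero in silent bins
    return [
        [math.sqrt(s * s / (s * s + n * n + eps)) for s, n in zip(s_row, n_row)]
        for s_row, n_row in zip(speech_mag, noise_mag)
    ]

def apply_mask(mixture_mag, mask):
    """Enhance the mixture magnitude spectrogram by elementwise masking."""
    return [
        [m * g for m, g in zip(m_row, g_row)]
        for m_row, g_row in zip(mixture_mag, mask)
    ]

# Toy 2-frame, 3-bin magnitude spectrograms (illustrative values only).
speech = [[3.0, 0.5, 2.0], [1.0, 4.0, 0.2]]
noise = [[1.0, 2.0, 0.5], [2.0, 1.0, 3.0]]
mixture = [[s + n for s, n in zip(sr, nr)] for sr, nr in zip(speech, noise)]

mask = ideal_ratio_mask(speech, noise)
enhanced = apply_mask(mixture, mask)
```

In practice the clean speech and noise are unknown, so a neural network (CNN, LSTM, or diffusion model, as surveyed above) is trained to predict the mask from the noisy input alone; the oracle IRM then serves as the training target.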
Papers
Perceptual Contrast Stretching on Target Feature for Speech Enhancement
Rong Chao, Cheng Yu, Szu-Wei Fu, Xugang Lu, Yu Tsao
Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain
Simon Welker, Julius Richter, Timo Gerkmann
Effective data screening technique for crowdsourced speech intelligibility experiments: Evaluation with IRM-based speech enhancement
Ayako Yamamoto, Toshio Irino, Shoko Araki, Kenichi Arai, Atsunori Ogawa, Keisuke Kinoshita, Tomohiro Nakatani
SpecGrad: Diffusion Probabilistic Model based Neural Vocoder with Adaptive Noise Spectral Shaping
Yuma Koizumi, Heiga Zen, Kohei Yatabe, Nanxin Chen, Michiel Bacchiani