Dysarthric Speech
Dysarthric speech, characterized by impaired articulation due to neurological conditions, presents a significant challenge for automatic speech recognition (ASR) and related applications. Current research focuses on building robust ASR systems for dysarthric speech using techniques such as self-supervised learning (e.g., HuBERT, wav2vec 2.0), prototype-based adaptation, and generative adversarial networks (GANs) that synthesize additional training data to address data scarcity and inter-speaker variability. These advances aim to improve recognition accuracy, intelligibility assessment, and severity classification, ultimately enhancing communication and quality of life for individuals with dysarthria.
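As a rough illustration of the self-supervised approach mentioned above, the sketch below runs a pretrained wav2vec 2.0 CTC model from Hugging Face Transformers on a single recording. The audio file name is hypothetical, and in practice the model would first be fine-tuned or adapted on a dysarthric corpus such as TORGO or UASpeech before its output is reliable; this is only a minimal starting point, not a method from the papers listed here.

```python
# Minimal sketch: transcribing one (hypothetical) dysarthric recording with a
# pretrained wav2vec 2.0 model. Fine-tuning on dysarthric data is assumed to
# happen before this step in a real system.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model.eval()

# Load the recording and resample to the 16 kHz rate wav2vec 2.0 expects.
waveform, sample_rate = torchaudio.load("dysarthric_sample.wav")  # hypothetical path
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding; dysarthric speech typically needs model adaptation
# before this transcript is usable.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```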