Automatic Speech Recognition
Automatic Speech Recognition (ASR) aims to accurately transcribe spoken language into text, driving research into robust and efficient models. Current efforts focus on improving accuracy and robustness through techniques like consistency regularization in Connectionist Temporal Classification (CTC), leveraging pre-trained multilingual models for low-resource languages, and integrating Large Language Models (LLMs) for enhanced contextual understanding and improved handling of diverse accents and speech disorders. These advancements have significant implications for accessibility, enabling applications in diverse fields such as healthcare, education, and human-computer interaction.
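The CTC objective mentioned above scores a transcript by summing the probability of every frame-level alignment that collapses to it (repeats merged, blanks removed). As a rough illustration only, not drawn from any of the papers below, here is a minimal pure-Python sketch of the CTC forward algorithm on raw probabilities; the function name and toy inputs are invented for this example:

```python
import math

def ctc_loss(probs, target, blank=0):
    """Negative log-likelihood of `target` under CTC, via the forward algorithm.

    probs:  list of per-frame probability distributions over the vocabulary
            (each row sums to 1; real systems work in log space for stability).
    target: list of label ids, without blanks.
    """
    # Interleave blanks around the labels: [a, b] -> [blank, a, blank, b, blank]
    ext = [blank]
    for label in target:
        ext += [label, blank]
    S = len(ext)

    # alpha[s] = total probability of all partial alignments ending in state s
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]
    if S > 1:
        alpha[1] = probs[0][ext[1]]

    for frame in probs[1:]:
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]                      # stay in the same state
            if s >= 1:
                a += alpha[s - 1]             # advance one state
            # Skipping a state is allowed unless it lands on a blank
            # or on a repeat of the label two states back.
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]
            new[s] = a * frame[ext[s]]
        alpha = new

    # Valid alignments end on the last label or the trailing blank.
    total = alpha[S - 1] + (alpha[S - 2] if S > 1 else 0.0)
    return -math.log(total)
```

For instance, with two frames, a vocabulary of {blank, "a"}, uniform per-frame probabilities, and target ["a"], exactly three of the four alignments collapse to "a" (aa, a-, -a), so the loss is -log(0.75). Consistency-regularization approaches typically add a second term on top of a loss like this, penalizing divergence between two stochastic forward passes of the same utterance.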
Papers
Development of Hybrid ASR Systems for Low Resource Medical Domain Conversational Telephone Speech
Christoph Lüscher, Mohammad Zeineldeen, Zijian Yang, Tina Raissi, Peter Vieting, Khai Le-Duc, Weiyue Wang, Ralf Schlüter, Hermann Ney
Investigating self-supervised, weakly supervised and fully supervised training approaches for multi-domain automatic speech recognition: a study on Bangladeshi Bangla
Ahnaf Mozib Samin, M. Humayon Kobir, Md. Mushtaq Shahriyar Rafee, M. Firoz Ahmed, Mehedi Hasan, Partha Ghosh, Shafkat Kibria, M. Shahidur Rahman
G-Augment: Searching for the Meta-Structure of Data Augmentation Policies for ASR
Gary Wang, Ekin D. Cubuk, Andrew Rosenberg, Shuyang Cheng, Ron J. Weiss, Bhuvana Ramabhadran, Pedro J. Moreno, Quoc V. Le, Daniel S. Park
End-to-End Integration of Speech Recognition, Dereverberation, Beamforming, and Self-Supervised Learning Representation
Yoshiki Masuyama, Xuankai Chang, Samuele Cornell, Shinji Watanabe, Nobutaka Ono
Maestro-U: Leveraging joint speech-text representation learning for zero supervised speech ASR
Zhehuai Chen, Ankur Bapna, Andrew Rosenberg, Yu Zhang, Bhuvana Ramabhadran, Pedro Moreno, Nanxin Chen
HMM vs. CTC for Automatic Speech Recognition: Comparison Based on Full-Sum Training from Scratch
Tina Raissi, Wei Zhou, Simon Berger, Ralf Schlüter, Hermann Ney
Comparison of Soft and Hard Target RNN-T Distillation for Large-scale ASR
Dongseong Hwang, Khe Chai Sim, Yu Zhang, Trevor Strohman
Streaming Punctuation for Long-form Dictation with Transformers
Piyush Behre, Sharman Tan, Padma Varadharajan, Shuangyu Chang
Automatic Speech Recognition of Low-Resource Languages Based on Chukchi
Anastasia Safonova, Tatiana Yudina, Emil Nadimanov, Cydnie Davenport