Automatic Speech Recognition
Automatic Speech Recognition (ASR) aims to accurately transcribe spoken language into text, driving research into robust and efficient models. Current efforts focus on improving accuracy and robustness through techniques like consistency regularization in Connectionist Temporal Classification (CTC), leveraging pre-trained multilingual models for low-resource languages, and integrating Large Language Models (LLMs) for enhanced contextual understanding and improved handling of diverse accents and speech disorders. These advancements have significant implications for accessibility, enabling applications in diverse fields such as healthcare, education, and human-computer interaction.
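The consistency-regularization idea mentioned above can be illustrated with a minimal, self-contained sketch (not any listed paper's exact method): two augmented "views" of the same utterance are each scored with the standard CTC forward algorithm, and a symmetric KL term ties their frame-level posteriors together. The vocabulary, toy data, and function names here are all illustrative assumptions.

```python
import math

def logsumexp(*xs):
    # Numerically stable log(sum(exp(x))) over the given values.
    m = max(xs)
    if m == float("-inf"):
        return m
    return m + math.log(sum(math.exp(x - m) for x in xs))

def ctc_neg_log_likelihood(log_probs, target, blank=0):
    # CTC forward (alpha) recursion over the blank-interleaved label sequence.
    # log_probs: T x V per-frame log posteriors; target: list of label ids.
    ext = [blank]
    for lab in target:
        ext += [lab, blank]
    S, T = len(ext), len(log_probs)
    NEG = float("-inf")
    alpha = [NEG] * S
    alpha[0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]
    for t in range(1, T):
        new = [NEG] * S
        for s in range(S):
            cands = [alpha[s]]
            if s > 0:
                cands.append(alpha[s - 1])
            # Skip transition is allowed only between distinct non-blank labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(alpha[s - 2])
            new[s] = logsumexp(*cands) + log_probs[t][ext[s]]
        alpha = new
    tail = alpha[-2] if S > 1 else NEG
    return -logsumexp(alpha[-1], tail)

def symmetric_kl(p_log, q_log):
    # Symmetric KL divergence between two log-distributions over the vocabulary.
    kl_pq = sum(math.exp(p) * (p - q) for p, q in zip(p_log, q_log))
    kl_qp = sum(math.exp(q) * (q - p) for p, q in zip(p_log, q_log))
    return kl_pq + kl_qp

def consistency_ctc_loss(view1, view2, target, lam=0.5, blank=0):
    # Average CTC loss over both views plus a frame-level consistency penalty.
    ctc = 0.5 * (ctc_neg_log_likelihood(view1, target, blank)
                 + ctc_neg_log_likelihood(view2, target, blank))
    cons = sum(symmetric_kl(a, b) for a, b in zip(view1, view2)) / len(view1)
    return ctc + lam * cons

# Toy example: vocabulary {0: blank, 1: 'a', 2: 'b'}, 4 frames, target "ab".
def log3(p):
    return [math.log(x) for x in p]

view1 = [log3([0.6, 0.3, 0.1]), log3([0.2, 0.7, 0.1]),
         log3([0.3, 0.1, 0.6]), log3([0.7, 0.1, 0.2])]
view2 = [log3([0.5, 0.4, 0.1]), log3([0.3, 0.6, 0.1]),
         log3([0.2, 0.2, 0.6]), log3([0.6, 0.2, 0.2])]
loss = consistency_ctc_loss(view1, view2, target=[1, 2])
```

In a real system the two views would come from data augmentation (e.g. spectrogram masking) of the same utterance, and the log posteriors would be produced by the acoustic model rather than handwritten; the consistency weight `lam` is a tunable assumption.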
Papers
Evaluation of Automated Speech Recognition Systems for Conversational Speech: A Linguistic Perspective
Hannaneh B. Pasandi, Haniyeh B. Pasandi
LAMASSU: Streaming Language-Agnostic Multilingual Speech Recognition and Translation Using Neural Transducers
Peidong Wang, Eric Sun, Jian Xue, Yu Wu, Long Zhou, Yashesh Gaur, Shujie Liu, Jinyu Li
Stutter-TTS: Controlled Synthesis and Improved Recognition of Stuttered Speech
Xin Zhang, Iván Vallés-Pérez, Andreas Stolcke, Chengzhu Yu, Jasha Droppo, Olabanji Shonibare, Roberto Barra-Chicote, Venkatesh Ravichandran
Resource-Efficient Transfer Learning From Speech Foundation Model Using Hierarchical Feature Fusion
Zhouyuan Huo, Khe Chai Sim, Bo Li, Dongseong Hwang, Tara N. Sainath, Trevor Strohman
Biased Self-supervised learning for ASR
Florian L. Kreyssig, Yangyang Shi, Jinxi Guo, Leda Sari, Abdelrahman Mohamed, Philip C. Woodland
Probing Statistical Representations For End-To-End ASR
Anna Ollerenshaw, Md Asif Jalal, Thomas Hain
H_eval: A new hybrid evaluation metric for automatic speech recognition tasks
Zitha Sasindran, Harsha Yelchuri, T. V. Prabhakar, Supreeth Rao
Channel-Aware Pretraining of Joint Encoder-Decoder Self-Supervised Model for Telephonic-Speech ASR
Vrunda N. Sukhadia, A. Arunkumar, S. Umesh
Leveraging Domain Features for Detecting Adversarial Attacks Against Deep Speech Recognition in Noise
Christian Heider Nielsen, Zheng-Hua Tan
Phonetic-assisted Multi-Target Units Modeling for Improving Conformer-Transducer ASR system
Li Li, Dongxing Xu, Haoran Wei, Yanhua Long
Monolingual Recognizers Fusion for Code-switching Speech Recognition
Tongtong Song, Qiang Xu, Haoyu Lu, Longbiao Wang, Hao Shi, Yuqin Lin, Yanbing Yang, Jianwu Dang
Conversation-oriented ASR with multi-look-ahead CBS architecture
Huaibo Zhao, Shinya Fujie, Tetsuji Ogawa, Jin Sakuma, Yusuke Kida, Tetsunori Kobayashi
More Speaking or More Speakers?
Dan Berrebbi, Ronan Collobert, Navdeep Jaitly, Tatiana Likhomanenko
InterMPL: Momentum Pseudo-Labeling with Intermediate CTC Loss
Yosuke Higuchi, Tetsuji Ogawa, Tetsunori Kobayashi, Shinji Watanabe
Unified End-to-End Speech Recognition and Endpointing for Fast and Efficient Speech Systems
Shaan Bijwadia, Shuo-yiin Chang, Bo Li, Tara Sainath, Chao Zhang, Yanzhang He
Avoid Overthinking in Self-Supervised Models for Speech Recognition
Dan Berrebbi, Brian Yan, Shinji Watanabe
TrimTail: Low-Latency Streaming ASR with Simple but Effective Spectrogram-Level Length Penalty
Xingchen Song, Di Wu, Zhiyong Wu, Binbin Zhang, Yuekai Zhang, Zhendong Peng, Wenpeng Li, Fuping Pan, Changbao Zhu
A Comparative Study on Multichannel Speaker-Attributed Automatic Speech Recognition in Multi-party Meetings
Mohan Shi, Jie Zhang, Zhihao Du, Fan Yu, Qian Chen, Shiliang Zhang, Li-Rong Dai
Speech-text based multi-modal training with bidirectional attention for improved speech recognition
Yuhang Yang, Haihua Xu, Hao Huang, Eng Siong Chng, Sheng Li