ASR Model
Automatic speech recognition (ASR) models transcribe spoken language into text, a task central to applications such as voice assistants, captioning, and dictation. Current research emphasizes robustness across diverse accents, languages, and noisy environments, often building on transformer-based architectures such as Wav2Vec 2.0 and Conformer, and sometimes incorporating visual cues such as lip movements (audio-visual ASR) to improve accuracy. Significant effort also goes toward mitigating accent- and demographic-related biases, improving efficiency through knowledge distillation and self-supervised learning, and developing methods for low-resource languages. These advances are driving progress in accessibility technologies, human-computer interaction, and language documentation.
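As a concrete illustration of the transcription task, the sketch below runs greedy CTC decoding with a pretrained Wav2Vec 2.0 model. It assumes the Hugging Face transformers library and the facebook/wav2vec2-base-960h checkpoint, and substitutes a silent placeholder waveform for real 16 kHz audio; it is a generic example, not code from any of the papers listed here.

```python
# Minimal sketch: greedy CTC transcription with a pretrained Wav2Vec 2.0 model.
# Assumes the Hugging Face `transformers` library and the
# facebook/wav2vec2-base-960h checkpoint; the silent waveform below stands in
# for real 16 kHz mono audio.
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

waveform = np.zeros(16000, dtype=np.float32)  # one second of "audio" at 16 kHz

# Normalize and batch the raw waveform, then compute per-frame token logits.
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame; batch_decode
# collapses repeated tokens and strips the CTC blank symbol.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```

Approaches like those in the papers below typically modify parts of this pipeline, such as the encoder, the decoding strategy for streaming, or the input modalities, rather than the overall waveform-to-text interface.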
Papers
TrimTail: Low-Latency Streaming ASR with Simple but Effective Spectrogram-Level Length Penalty
Xingchen Song, Di Wu, Zhiyong Wu, Binbin Zhang, Yuekai Zhang, Zhendong Peng, Wenpeng Li, Fuping Pan, Changbao Zhu
Speech-text based multi-modal training with bidirectional attention for improved speech recognition
Yuhang Yang, Haihua Xu, Hao Huang, Eng Siong Chng, Sheng Li