Automatic Speech Recognition
Automatic Speech Recognition (ASR) aims to accurately transcribe spoken language into text, driving research into robust and efficient models. Current efforts focus on improving accuracy and robustness through techniques like consistency regularization in Connectionist Temporal Classification (CTC), leveraging pre-trained multilingual models for low-resource languages, and integrating Large Language Models (LLMs) for enhanced contextual understanding and improved handling of diverse accents and speech disorders. These advancements have significant implications for accessibility, enabling applications in diverse fields such as healthcare, education, and human-computer interaction.
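The CTC objective mentioned above defines how a frame-level label sequence maps to a final transcript: repeated labels are merged and a special blank token is removed. As a minimal illustrative sketch (not taken from any of the listed papers; the blank symbol and function name are chosen for this example), greedy CTC decoding picks the most likely label per frame and then applies this collapse rule:

```python
BLANK = "-"  # hypothetical blank token used only for this illustration

def ctc_collapse(frame_labels):
    """Collapse a per-frame label sequence into a CTC output string:
    merge consecutive repeats, then drop blank tokens."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev:       # merge consecutive repeated labels
            if lab != BLANK:  # remove blank tokens
                out.append(lab)
        prev = lab
    return "".join(out)

# Greedy decoding example: per-frame argmax labels -> collapsed transcript
print(ctc_collapse(list("hh-e-ll-llo-")))  # -> "hello"
```

Because many frame sequences collapse to the same transcript, the CTC loss sums over all such alignments; techniques like the consistency regularization referenced above constrain these frame-level distributions across augmented views of the same utterance.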
Papers
DCTX-Conformer: Dynamic context carry-over for low latency unified streaming and non-streaming Conformer ASR
Goeric Huybrechts, Srikanth Ronanki, Xilai Li, Hadis Nosrati, Sravan Bodapati, Katrin Kirchhoff
Large-scale Language Model Rescoring on Long-form Data
Tongzhou Chen, Cyril Allauzen, Yinghui Huang, Daniel Park, David Rybach, W. Ronny Huang, Rodrigo Cabrera, Kartik Audhkhasi, Bhuvana Ramabhadran, Pedro J. Moreno, Michael Riley
Multi-View Frequency-Attention Alternative to CNN Frontends for Automatic Speech Recognition
Belen Alastruey, Lukas Drude, Jahn Heymann, Simon Wiesler
On the N-gram Approximation of Pre-trained Language Models
Aravind Krishnan, Jesujoba Alabi, Dietrich Klakow
Multimodal Audio-textual Architecture for Robust Spoken Language Understanding
Anderson R. Avila, Mehdi Rezagholizadeh, Chao Xing
Developing Speech Processing Pipelines for Police Accountability
Anjalie Field, Prateek Verma, Nay San, Jennifer L. Eberhardt, Dan Jurafsky
A Theory of Unsupervised Speech Recognition
Liming Wang, Mark Hasegawa-Johnson, Chang D. Yoo
Improving Frame-level Classifier for Word Timings with Non-peaky CTC in End-to-End Automatic Speech Recognition
Xianzhao Chen, Yist Y. Lin, Kang Wang, Yi He, Zejun Ma
Lenient Evaluation of Japanese Speech Recognition: Modeling Naturally Occurring Spelling Inconsistency
Shigeki Karita, Richard Sproat, Haruko Ishikawa
A study on the impact of Self-Supervised Learning on automatic dysarthric speech assessment
Xavier F. Cadet, Ranya Aloufi, Sara Ahmadi-Abhari, Hamed Haddadi
Transfer Learning from Pre-trained Language Models Improves End-to-End Speech Summarization
Kohei Matsuura, Takanori Ashihara, Takafumi Moriya, Tomohiro Tanaka, Takatomo Kano, Atsunori Ogawa, Marc Delcroix
An ASR-Based Tutor for Learning to Read: How to Optimize Feedback to First Graders
Yu Bai, Cristian Tejedor-Garcia, Ferdy Hubers, Catia Cucchiarini, Helmer Strik
Improving Fairness and Robustness in End-to-End Speech Recognition through unsupervised clustering
Irina-Elena Veliche, Pascale Fung
Automatic Assessment of Oral Reading Accuracy for Reading Diagnostics
Bo Molenaar, Cristian Tejedor-Garcia, Helmer Strik, Catia Cucchiarini
Alzheimer Disease Classification through ASR-based Transcriptions: Exploring the Impact of Punctuation and Pauses
Lucía Gómez-Zaragozá, Simone Wills, Cristian Tejedor-Garcia, Javier Marín-Morales, Mariano Alcañiz, Helmer Strik
SpellMapper: A non-autoregressive neural spellchecker for ASR customization with candidate retrieval based on n-gram mappings
Alexandra Antonova, Evelina Bakhturina, Boris Ginsburg
End-to-End Joint Target and Non-Target Speakers ASR
Ryo Masumura, Naoki Makishima, Taiga Yamane, Yoshihiko Yamazaki, Saki Mizuno, Mana Ihori, Mihiro Uchida, Keita Suzuki, Hiroshi Sato, Tomohiro Tanaka, Akihiko Takashima, Satoshi Suzuki, Takafumi Moriya, Nobukatsu Hojo, Atsushi Ando