Automatic Speech Recognition
Automatic Speech Recognition (ASR) aims to transcribe spoken language into text accurately, driving research into robust and efficient models. Current efforts focus on improving accuracy and robustness through techniques such as consistency regularization for Connectionist Temporal Classification (CTC) training, leveraging pre-trained multilingual models for low-resource languages, and integrating Large Language Models (LLMs) for better contextual understanding and handling of diverse accents and speech disorders. These advances have significant implications for accessibility, enabling applications in fields such as healthcare, education, and human-computer interaction.
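To make the consistency-regularization idea concrete, here is a minimal PyTorch sketch (an illustrative assumption, not the method of any specific paper listed below): the CTC loss is computed on two augmented views of the same utterance, and a symmetric KL-divergence term encourages their frame-level output distributions to agree. The function name and the weighting scheme (`alpha`) are hypothetical.

```python
# Sketch of consistency-regularized CTC training (hypothetical helper, not from the listed papers).
import torch
import torch.nn.functional as F

def consistency_ctc_loss(log_probs_a, log_probs_b, targets,
                         input_lengths, target_lengths, alpha=0.5):
    """log_probs_a/b: (T, N, C) log-softmax outputs for two augmented views of the same audio."""
    # Standard CTC loss on each augmented view.
    ctc_a = F.ctc_loss(log_probs_a, targets, input_lengths, target_lengths,
                       blank=0, zero_infinity=True)
    ctc_b = F.ctc_loss(log_probs_b, targets, input_lengths, target_lengths,
                       blank=0, zero_infinity=True)
    # Symmetric KL divergence pushing the two output distributions toward each other.
    kl_ab = F.kl_div(log_probs_a, log_probs_b, log_target=True, reduction="batchmean")
    kl_ba = F.kl_div(log_probs_b, log_probs_a, log_target=True, reduction="batchmean")
    consistency = 0.5 * (kl_ab + kl_ba)
    return 0.5 * (ctc_a + ctc_b) + alpha * consistency

if __name__ == "__main__":
    T, N, C, S = 50, 4, 30, 10  # frames, batch size, output classes, target length
    log_probs_a = torch.randn(T, N, C).log_softmax(-1)
    log_probs_b = torch.randn(T, N, C).log_softmax(-1)
    targets = torch.randint(1, C, (N, S))           # label 0 is reserved for the CTC blank
    input_lengths = torch.full((N,), T, dtype=torch.long)
    target_lengths = torch.full((N,), S, dtype=torch.long)
    print(consistency_ctc_loss(log_probs_a, log_probs_b, targets,
                               input_lengths, target_lengths))
```

In practice the two views would come from data augmentation (e.g., SpecAugment applied twice), and `alpha` trades off transcription accuracy against invariance to the augmentation.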
Papers
Careful Whisper -- leveraging advances in automatic speech recognition for robust and interpretable aphasia subtype classification
Laurin Wagner, Mario Zusag, Theresa Bloder
Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time
Xinfeng Li, Chen Yan, Xuancun Lu, Zihan Zeng, Xiaoyu Ji, Wenyuan Xu
Boosting Punctuation Restoration with Data Generation and Reinforcement Learning
Viet Dac Lai, Abel Salinas, Hao Tan, Trung Bui, Quan Tran, Seunghyun Yoon, Hanieh Deilamsalehy, Franck Dernoncourt, Thien Huu Nguyen
Integration of Frame- and Label-synchronous Beam Search for Streaming Encoder-decoder Speech Recognition
Emiru Tsunoo, Hayato Futami, Yosuke Kashiwagi, Siddhant Arora, Shinji Watanabe
Code-Switched Urdu ASR for Noisy Telephonic Environment using Data Centric Approach with Hybrid HMM and CNN-TDNN
Muhammad Danyal Khan, Raheem Ali, Arshad Aziz
Adaptation of Whisper models to child speech recognition
Rishabh Jain, Andrei Barcovschi, Mariam Yiwere, Peter Corcoran, Horia Cucu
A Model for Every User and Budget: Label-Free and Personalized Mixed-Precision Quantization
Edward Fish, Umberto Michieli, Mete Ozay