Spoken Language Understanding
Spoken Language Understanding (SLU) focuses on enabling computers to comprehend human speech, extracting meaning and intent from spoken dialogue. Current research emphasizes improving the robustness and accuracy of SLU systems, particularly for noisy speech, low-resource languages, and out-of-distribution data. Common approaches include end-to-end models, hybrid pipelines that couple speech encoders with large language models (LLMs), and contrastive learning techniques. Advances in SLU are crucial for enhancing human-computer interaction in applications such as virtual assistants, improving accessibility for diverse languages, and advancing the broader field of artificial intelligence.
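To make the pipeline notion above concrete, here is a minimal, purely illustrative sketch of the intent-detection step that follows ASR in a pipeline SLU system. The intent labels and example utterances are invented for illustration; real systems would use learned speech or text embeddings (e.g. from a speech encoder or LLM, often trained contrastively) rather than bag-of-words cosine similarity.

```python
from collections import Counter
import math

# Hypothetical intent inventory: each intent has a few example utterances.
# A deployed system would learn these representations; this is a toy stand-in.
INTENT_EXAMPLES = {
    "set_alarm": ["wake me up at seven", "set an alarm for 6 am"],
    "play_music": ["play some jazz", "put on my workout playlist"],
    "weather_query": ["will it rain today", "what is the weather tomorrow"],
}

def bow(text):
    """Bag-of-words vector of a transcript, as a token-count Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def detect_intent(transcript):
    """Map an ASR transcript to the intent with the closest example."""
    vec = bow(transcript)
    scores = {
        intent: max(cosine(vec, bow(ex)) for ex in examples)
        for intent, examples in INTENT_EXAMPLES.items()
    }
    return max(scores, key=scores.get)

print(detect_intent("please set an alarm for 8 am"))  # set_alarm
```

An end-to-end SLU model would instead predict the intent directly from audio, skipping the intermediate transcript; the pipeline form shown here makes the ASR/NLU split explicit.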
Papers
Can ChatGPT Detect Intent? Evaluating Large Language Models for Spoken Language Understanding
Mutian He, Philip N. Garner
Extrapolating Multilingual Understanding Models as Multilingual Generators
Bohong Wu, Fei Yuan, Hai Zhao, Lei Li, Jingjing Xu
Exploring Speaker-Related Information in Spoken Language Understanding for Better Speaker Diarization
Luyao Cheng, Siqi Zheng, Qinglin Zhang, Hui Wang, Yafeng Chen, Qian Chen
A Study on the Integration of Pipeline and E2E SLU systems for Spoken Semantic Parsing toward STOP Quality Challenge
Siddhant Arora, Hayato Futami, Shih-Lun Wu, Jessica Huynh, Yifan Peng, Yosuke Kashiwagi, Emiru Tsunoo, Brian Yan, Shinji Watanabe
The Pipeline System of ASR and NLU with MLM-based Data Augmentation toward STOP Low-resource Challenge
Hayato Futami, Jessica Huynh, Siddhant Arora, Shih-Lun Wu, Yosuke Kashiwagi, Yifan Peng, Brian Yan, Emiru Tsunoo, Shinji Watanabe