Speech Processing
Speech processing research aims to enable computers to understand, interpret, and generate human speech, focusing on tasks such as speech recognition, synthesis, and enhancement. Current efforts concentrate on improving model efficiency (e.g., via linear-complexity attention mechanisms) and robustness across diverse languages and acoustic conditions, often leveraging large language models and self-supervised learning. These advances are crucial for broader accessibility of speech technology, with impact ranging from healthcare (e.g., depression screening) to assistive technologies and improved human-computer interaction.
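As a rough illustration of the "linear-complexity attention" idea mentioned above: standard softmax attention costs O(n²) in sequence length n, while kernelized linear attention replaces the softmax with a positive feature map φ so the key-value product can be computed once, giving O(n) cost. The sketch below is a minimal NumPy version assuming the elu(x)+1 feature map; function names and shapes are illustrative, not drawn from any specific paper in this list.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized linear attention: O(n) in sequence length.

    Q, K: (n, d) query/key matrices; V: (n, d_v) value matrix.
    Uses the positive feature map phi(x) = elu(x) + 1 (an assumption
    for this sketch; any positive map works).
    """
    def phi(x):
        # elu(x) + 1, written without a deep-learning dependency
        return np.where(x > 0, x + 1.0, np.exp(x))

    Qp, Kp = phi(Q), phi(K)            # (n, d), strictly positive
    KV = Kp.T @ V                      # (d, d_v): summed once over the sequence
    Z = Qp @ Kp.sum(axis=0)            # (n,): per-query normalizer
    return (Qp @ KV) / Z[:, None]      # (n, d_v), linear in n
```

Because each output row is a convex combination of the value rows (positive weights summing to 1), every output stays within the per-column range of V, which is a quick sanity check on an implementation like this.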
Papers
Exploration on HuBERT with Multiple Resolutions
Jiatong Shi, Yun Tang, Hirofumi Inaguma, Hongyu Gong, Juan Pino, Shinji Watanabe
How Generative Spoken Language Modeling Encodes Noisy Speech: Investigation from Phonetics to Syntactics
Joonyong Park, Shinnosuke Takamichi, Tomohiko Nakamura, Kentaro Seki, Detai Xin, Hiroshi Saruwatari
A Parameter-Efficient Learning Approach to Arabic Dialect Identification with Pre-Trained General-Purpose Speech Model
Srijith Radhakrishnan, Chao-Han Huck Yang, Sumeer Ahmad Khan, Narsis A. Kiani, David Gomez-Cabrero, Jesper N. Tegner
FunASR: A Fundamental End-to-End Speech Recognition Toolkit
Zhifu Gao, Zerui Li, Jiaming Wang, Haoneng Luo, Xian Shi, Mengzhe Chen, Yabin Li, Lingyun Zuo, Zhihao Du, Zhangyu Xiao, Shiliang Zhang
MUG: A General Meeting Understanding and Generation Benchmark
Qinglin Zhang, Chong Deng, Jiaqing Liu, Hai Yu, Qian Chen, Wen Wang, Zhijie Yan, Jinglin Liu, Yi Ren, Zhou Zhao
Overview of the ICASSP 2023 General Meeting Understanding and Generation Challenge (MUG)
Qinglin Zhang, Chong Deng, Jiaqing Liu, Hai Yu, Qian Chen, Wen Wang, Zhijie Yan, Jinglin Liu, Yi Ren, Zhou Zhao