Speech Generation
Speech generation research aims to create systems that produce natural-sounding and expressive speech from various inputs, such as text or other audio. Current efforts focus on improving model efficiency and controllability, exploring autoregressive and non-autoregressive architectures alongside flow-matching and diffusion-based approaches, often building on discrete speech units and leveraging techniques such as prompting and knowledge distillation. These advances matter for applications ranging from virtual assistants and accessibility tools to creative content generation and voice privacy technologies, driving progress in both speech processing and artificial intelligence.
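To make one of the recurring ideas above concrete, the sketch below illustrates masked audio token modeling over discrete speech units: a random subset of codec tokens is replaced with a mask id, and a Transformer is trained to recover the originals at the masked positions. All shapes, hyperparameters, and the class name are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal sketch of masked audio token modeling over discrete speech units.
# Hypothetical vocabulary size, model dimensions, and masking ratio.
import torch
import torch.nn as nn

class MaskedAudioTokenModel(nn.Module):
    def __init__(self, vocab_size=1024, dim=256, num_layers=4, num_heads=4):
        super().__init__()
        self.mask_id = vocab_size  # extra id reserved for the [MASK] token
        self.embed = nn.Embedding(vocab_size + 1, dim)
        layer = nn.TransformerEncoderLayer(dim, num_heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens, mask):
        # Replace masked positions with the [MASK] id, then predict the originals.
        inputs = torch.where(mask, torch.full_like(tokens, self.mask_id), tokens)
        hidden = self.encoder(self.embed(inputs))
        return self.head(hidden)

# One training step: mask a random subset of codec tokens and minimize
# cross-entropy only on the masked positions.
model = MaskedAudioTokenModel()
tokens = torch.randint(0, 1024, (8, 200))   # batch of discrete speech-unit ids
mask = torch.rand(tokens.shape, dtype=torch.float) < 0.5
logits = model(tokens, mask)
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
```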
Papers
Single-stage TTS with Masked Audio Token Modeling and Semantic Knowledge Distillation
Gerard I. Gállego, Roy Fejgin, Chunghsin Yeh, Xiaoyu Liu, Gautam Bhattacharya
Enhancing Multilingual Speech Generation and Recognition Abilities in LLMs with Constructed Code-switched Data
Jing Xu, Daxin Tan, Jiaqi Wang, Xiao Chen