Speech Synthesis
Speech synthesis aims to generate human-like speech from text or other inputs, with the goals of improving naturalness, expressiveness, and efficiency. Current research emphasizes model architectures such as diffusion models, generative adversarial networks (GANs), and large language models (LLMs), often combined with techniques like low-rank adaptation (LoRA) for parameter efficiency and finer control over emotion and prosody. These advances have significant implications for applications ranging from assistive technologies for the visually impaired to realistic virtual avatars and improved accessibility for under-resourced languages.
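To make the LoRA idea mentioned above concrete, here is a minimal plain-Python sketch. It is illustrative only (toy dimensions, made-up weights, not any specific paper's implementation): instead of fine-tuning a full weight matrix W (d_out x d_in), LoRA trains a low-rank correction B @ A with rank r much smaller than the layer width, so the effective weight becomes W + (alpha / r) * B @ A.

```python
def matmul(X, Y):
    """Plain-Python matrix product of X (m x k) and Y (k x n)."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add_scaled(X, Y, s):
    """Elementwise X + s * Y for same-shape matrices."""
    return [[x + s * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

d_in, d_out, r, alpha = 4, 4, 1, 2  # toy sizes; in practice r << d_in

# Frozen pretrained weight (identity here for readability).
W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]
A = [[0.1, 0.2, 0.3, 0.4]]            # trainable down-projection (r x d_in)
B = [[0.0] for _ in range(d_out)]     # trainable up-projection, zero-initialized

# Effective weight after adaptation: W + (alpha / r) * B @ A.
W_eff = add_scaled(W, matmul(B, A), alpha / r)

# Because B starts at zero, the adapted layer equals the frozen layer at init,
# so training begins from the pretrained model's behavior.
assert W_eff == W

# Only r * (d_in + d_out) parameters are trained instead of d_in * d_out.
print(r * (d_in + d_out), "trainable vs", d_in * d_out, "full")  # 8 trainable vs 16 full
```

The zero initialization of B is the standard trick that makes the adapted model start out identical to the base model; the savings grow with layer width, since r * (d_in + d_out) is linear in the dimensions while the full matrix is quadratic.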
Papers
Accelerating Codec-based Speech Synthesis with Multi-Token Prediction and Speculative Decoding
Tan Dat Nguyen, Ji-Hoon Kim, Jeongsoo Choi, Shukjae Choi, Jinseok Park, Younglo Lee, Joon Son Chung
DART: Disentanglement of Accent and Speaker Representation in Multispeaker Text-to-Speech
Jan Melechovsky, Ambuj Mehrish, Berrak Sisman, Dorien Herremans
DMDSpeech: Distilled Diffusion Model Surpassing The Teacher in Zero-shot Speech Synthesis via Direct Metric Optimization
Yinghao Aaron Li, Rithesh Kumar, Zeyu Jin
Everyday Speech in the Indian Subcontinent
Utkarsh Pathak, Chandra Sai Krishna Gunda, Sujitha Sathiyamoorthy, Keshav Agarwal, Hema A. Murthy (Indian Institute of Technology, Madras)
SpMis: An Investigation of Synthetic Spoken Misinformation Detection
Peizhuo Liu, Li Wang, Renqiang He, Haorui He, Lei Wang, Huadi Zheng, Jie Shi, Tong Xiao, Zhizheng Wu
Single-stage TTS with Masked Audio Token Modeling and Semantic Knowledge Distillation
Gerard I. Gállego, Roy Fejgin, Chunghsin Yeh, Xiaoyu Liu, Gautam Bhattacharya
Enhancing Multilingual Speech Generation and Recognition Abilities in LLMs with Constructed Code-switched Data
Jing Xu, Daxin Tan, Jiaqi Wang, Xiao Chen