Speech Synthesis
Speech synthesis generates human-like speech from text or other inputs, with research aimed at improving naturalness, expressiveness, and efficiency. Current work centers on architectures such as diffusion models, generative adversarial networks (GANs), and large language models (LLMs), often combined with techniques like low-rank adaptation (LoRA) for parameter-efficient fine-tuning and finer control over emotion and prosody; a sketch of the LoRA idea follows below. These advances matter for applications ranging from assistive technologies for the visually impaired to realistic virtual avatars and improved accessibility for under-resourced languages.
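To make the parameter-efficiency point concrete, below is a minimal PyTorch sketch of low-rank adaptation as it is commonly applied to a projection layer inside a pretrained synthesis model (for example, to adapt a TTS model to a new speaker or emotion style). All names here (`LoRALinear`, `rank`, `alpha`) are illustrative assumptions, not taken from any of the papers listed; the sketch shows the general technique, not a specific paper's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B (A x), where A is (r x in) and B is (out x r).
    Only A and B are trained, so the adapter adds very few parameters."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A is initialized small, B at zero, so training starts from the
        # pretrained model's behavior.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus the scaled low-rank residual.
        return self.base(x) + self.scale * (x @ self.lora_a.T) @ self.lora_b.T

# Usage: wrap a projection (e.g. an attention projection in a TTS backbone)
# and fine-tune only the LoRA parameters.
proj = LoRALinear(nn.Linear(256, 256), rank=8)
trainable = sum(p.numel() for p in proj.parameters() if p.requires_grad)
print(trainable)  # 4,096 trainable params vs 65,792 in the full layer
```

The efficiency gain is the point: with rank 8 on a 256-to-256 projection, the adapter trains roughly 6% of the layer's parameters while the pretrained weights stay untouched.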
Papers
ToneUnit: A Speech Discretization Approach for Tonal Language Speech Synthesis
Dehua Tao, Daxin Tan, Yu Ting Yeung, Xiao Chen, Tan Lee
Generating Speakers by Prompting Listener Impressions for Pre-trained Multi-Speaker Text-to-Speech Systems
Zhengyang Chen, Xuechen Liu, Erica Cooper, Junichi Yamagishi, Yanmin Qian
TokSing: Singing Voice Synthesis based on Discrete Tokens
Yuning Wu, Chunlei Zhang, Jiatong Shi, Yuxun Tang, Shan Yang, Qin Jin
FreeV: Free Lunch For Vocoders Through Pseudo Inversed Mel Filter
Yuanjun Lv, Hai Li, Ying Yan, Junhui Liu, Danming Xie, Lei Xie
PolySpeech: Exploring Unified Multitask Speech Models for Competitiveness with Single-task Models
Runyan Yang, Huibao Yang, Xiqing Zhang, Tiantian Ye, Ying Liu, Yingying Gao, Shilei Zhang, Chao Deng, Junlan Feng
JenGAN: Stacked Shifted Filters in GAN-Based Speech Synthesis
Hyunjae Cho, Junhyeok Lee, Wonbin Jung
MakeSinger: A Semi-Supervised Training Method for Data-Efficient Singing Voice Synthesis via Classifier-free Diffusion Guidance
Semin Kim, Myeonghun Jeong, Hyeonseung Lee, Minchan Kim, Byoung Jin Choi, Nam Soo Kim