Articulatory Synthesis
Articulatory synthesis generates speech from descriptions of vocal-tract (articulator) movements, offering a physically grounded and more interpretable alternative to waveform- or spectrogram-based synthesis. Current research emphasizes efficient, high-quality models, often employing deep learning architectures such as autoencoders, generative adversarial networks (GANs), and differentiable digital signal processing (DDSP), to map articulatory features (e.g., electromagnetic articulography recordings) to speech waveforms. This approach holds significant promise for improving synthesis quality, enabling finer control over synthesized speech, and supporting research into speech production and speech disorders.
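The classical idea underlying these systems is the source-filter model: a glottal excitation signal is shaped by vocal-tract resonances (formants) whose frequencies are determined by articulator positions. A toy sketch of that pipeline, not any specific paper's method, is below; the Klatt-style two-pole resonator formula and the example formant values (roughly an /a/ vowel) are standard textbook choices, and all function names are illustrative.

```python
import math

def resonator_coeffs(freq_hz, bw_hz, sr):
    # Klatt-style two-pole resonator: y[n] = A*x[n] + B*y[n-1] + C*y[n-2]
    r = math.exp(-math.pi * bw_hz / sr)          # pole radius from bandwidth
    C = -r * r
    B = 2.0 * r * math.cos(2.0 * math.pi * freq_hz / sr)
    A = 1.0 - B - C                              # unity gain at DC
    return A, B, C

def apply_resonator(x, freq_hz, bw_hz, sr):
    # Filter the signal through one formant resonator.
    A, B, C = resonator_coeffs(freq_hz, bw_hz, sr)
    y = [0.0, 0.0]
    for s in x:
        y.append(A * s + B * y[-1] + C * y[-2])
    return y[2:]

def synthesize_vowel(f0=110.0, formants=((730, 90), (1090, 110)),
                     sr=16000, dur=0.2):
    # Crude glottal source: an impulse train at the fundamental frequency.
    n = int(sr * dur)
    period = int(sr / f0)
    signal = [1.0 if i % period == 0 else 0.0 for i in range(n)]
    # Cascade the formant resonators (the "vocal tract filter").
    for freq_hz, bw_hz in formants:
        signal = apply_resonator(signal, freq_hz, bw_hz, sr)
    # Normalize to [-1, 1] for playback.
    peak = max(abs(v) for v in signal) or 1.0
    return [v / peak for v in signal]

wave = synthesize_vowel()
```

Modern articulatory synthesizers replace the hand-designed resonator cascade with learned mappings (e.g., a neural network from EMA trajectories to formant or DDSP synthesizer parameters), but the controllability argument is the same: articulatory inputs give an interpretable handle on the output speech.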