Articulatory Signal
Articulatory signal research focuses on understanding and modeling the movements of the vocal tract during speech production. Current research employs diverse deep learning approaches, including generative adversarial networks (GANs), masked autoencoders, and variational autoencoders (VAEs), to map between acoustic speech signals and articulatory movements (acoustic-to-articulatory inversion and its forward counterpart, articulatory synthesis), often using electromagnetic articulography (EMA) data for training and evaluation. This work is crucial for advancing speech synthesis, improving speech therapy for individuals with speech disorders (e.g., following oral cancer treatment), and providing a deeper understanding of the neuro-cognitive processes underlying speech production. Furthermore, ethical considerations surrounding the use of AI-generated articulatory data are increasingly being addressed.
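
To make the acoustic-to-articulatory mapping concrete, the sketch below shows a minimal supervised inversion model in PyTorch: a bidirectional LSTM regresses EMA sensor trajectories from frame-level acoustic features. The feature dimensions, layer sizes, and dummy data are illustrative assumptions rather than settings from any particular paper or corpus; the GAN, masked-autoencoder, and VAE approaches mentioned above would replace or augment this simple regression backbone.

```python
# Minimal sketch of acoustic-to-articulatory inversion: a BiLSTM maps a
# sequence of acoustic frames (e.g., MFCCs) to EMA sensor trajectories.
# All dimensions and names (N_MFCC, N_EMA_CHANNELS, ...) are assumptions
# for illustration, not values from a specific dataset.
import torch
import torch.nn as nn

N_MFCC = 13          # acoustic features per frame (assumed)
N_EMA_CHANNELS = 12  # e.g., x/y coordinates of 6 articulator sensors (assumed)

class AcousticToEMA(nn.Module):
    def __init__(self, hidden=128, layers=2):
        super().__init__()
        self.rnn = nn.LSTM(N_MFCC, hidden, num_layers=layers,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, N_EMA_CHANNELS)

    def forward(self, mfcc):            # mfcc: (batch, frames, N_MFCC)
        out, _ = self.rnn(mfcc)
        return self.head(out)           # (batch, frames, N_EMA_CHANNELS)

model = AcousticToEMA()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for parallel acoustic/EMA recordings.
mfcc = torch.randn(8, 200, N_MFCC)
ema = torch.randn(8, 200, N_EMA_CHANNELS)

pred = model(mfcc)                      # predicted articulator trajectories
loss = loss_fn(pred, ema)               # frame-wise regression loss
loss.backward()
optimizer.step()
print(f"MSE loss: {loss.item():.4f}")
```

In practice, evaluation of such models is typically reported as root-mean-square error between predicted and measured sensor positions and as the correlation of the predicted trajectories with the EMA recordings.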