Lip-to-Speech Synthesis

Lip-to-speech synthesis aims to reconstruct audible speech from silent video of a person speaking, capturing both the linguistic content and the speaker's voice characteristics. Current research focuses on improving the quality and naturalness of the synthesized speech, often using deep learning architectures such as GANs and VAEs together with techniques like self-supervised learning and multi-task learning to address challenges such as homophenes (words whose lip movements look alike despite different sounds) and asynchronous audio-visual data. These advances hold significant promise for assistive technologies for people with speech impairments and for more realistic virtual avatars, while also deepening our understanding of audio-visual speech processing.
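A typical pipeline first predicts an intermediate acoustic representation, such as a mel-spectrogram, from the video frames and then converts it to a waveform with a neural vocoder. The sketch below is a minimal, illustrative PyTorch skeleton of that idea, with an auxiliary CTC head as one common multi-task strategy for disambiguating homophenes; all module names, shapes, and hyperparameters here are assumptions chosen for clarity, not taken from any specific paper.

```python
import torch
import torch.nn as nn

class LipToSpeech(nn.Module):
    """Illustrative encoder-decoder: silent lip video -> mel-spectrogram.

    A CTC head over characters is included as one common multi-task
    strategy for disambiguating homophenes. All sizes are assumptions.
    """

    def __init__(self, n_mels=80, vocab_size=30, hidden=256):
        super().__init__()
        # 3D-conv front end over grayscale mouth crops: (B, 1, T, H, W)
        self.visual_frontend = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7),
                      stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # keep time axis, pool space
        )
        # Temporal encoder over per-frame visual features
        self.encoder = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        # Acoustic head: one mel frame per video frame (upsampled downstream)
        self.mel_head = nn.Linear(2 * hidden, n_mels)
        # Auxiliary linguistic head for CTC-based multi-task training
        self.ctc_head = nn.Linear(2 * hidden, vocab_size)

    def forward(self, video):
        # video: (B, 1, T, H, W) grayscale mouth-region clips
        feats = self.visual_frontend(video)                    # (B, 64, T, 1, 1)
        feats = feats.squeeze(-1).squeeze(-1).transpose(1, 2)  # (B, T, 64)
        enc, _ = self.encoder(feats)                           # (B, T, 2*hidden)
        return self.mel_head(enc), self.ctc_head(enc)

model = LipToSpeech()
clip = torch.randn(2, 1, 75, 96, 96)  # 2 clips, 75 frames, 96x96 crops
mel, ctc_logits = model(clip)
print(mel.shape, ctc_logits.shape)    # (2, 75, 80) and (2, 75, 30)
```

In a full system the predicted mel-spectrogram would be passed to a pretrained neural vocoder (e.g., HiFi-GAN) to produce a waveform, and the CTC logits would be trained against transcripts when available, which is where the multi-task signal helps separate visually identical words.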

Papers