Joint Speech

Joint speech research integrates multiple modalities, such as speech, text, and visual data, to improve performance on tasks including speech recognition, accent recognition, and gesture generation. Current work emphasizes multi-modal models built from components such as connectionist temporal classification (CTC) objectives, encoder-decoder Conformers, and diffusion models, which fuse information from different sources and exploit their complementary strengths. These advances improve accuracy, robustness, and efficiency in applications ranging from human-computer interaction to assistive technologies, particularly under challenging conditions such as accented speech or noisy environments, with implications for fields including healthcare, education, and entertainment.
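
To make the fusion-plus-CTC pattern concrete, here is a minimal sketch (not taken from any specific paper in this collection) of a model that concatenates an audio encoder's frame outputs with features from a second, frame-aligned modality and trains the joint representation with a CTC objective. All module names, feature dimensions, and the assumption of frame-level alignment are illustrative choices, not a reference implementation.

```python
import torch
import torch.nn as nn


class JointSpeechCTC(nn.Module):
    """Toy multi-modal acoustic model: audio + second modality -> CTC logits."""

    def __init__(self, n_mels=80, visual_dim=512, hidden=256, vocab_size=32):
        super().__init__()
        # Audio branch: a small bidirectional LSTM over log-mel frames.
        self.audio_enc = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
        # Second-modality branch (e.g. visual features), projected to the same width.
        self.visual_proj = nn.Linear(visual_dim, 2 * hidden)
        # Fusion: concatenate aligned frames and map to the CTC vocabulary (blank at index 0).
        self.fusion = nn.Linear(4 * hidden, vocab_size)

    def forward(self, audio, visual):
        a, _ = self.audio_enc(audio)           # (B, T, 2*hidden)
        v = self.visual_proj(visual)           # (B, T, 2*hidden), assumed frame-aligned
        logits = self.fusion(torch.cat([a, v], dim=-1))
        return logits.log_softmax(dim=-1)      # CTC expects log-probabilities


# Toy forward/backward pass with random tensors.
B, T, U = 2, 100, 12                           # batch size, input frames, target length
model = JointSpeechCTC()
log_probs = model(torch.randn(B, T, 80), torch.randn(B, T, 512))
targets = torch.randint(1, 32, (B, U))         # label 0 is reserved for the CTC blank
loss = nn.CTCLoss(blank=0)(
    log_probs.transpose(0, 1),                 # CTCLoss wants (T, B, vocab)
    targets,
    input_lengths=torch.full((B,), T),
    target_lengths=torch.full((B,), U),
)
loss.backward()
```

In practice the papers listed below replace these toy branches with stronger encoders (e.g. Conformer blocks) and use more sophisticated fusion than simple concatenation, but the overall shape of the training setup is similar.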

Papers