Adaptive Speech
Adaptive speech research focuses on systems that dynamically adjust speech parameters to optimize intelligibility and user experience across diverse acoustic environments and user characteristics. Current efforts include large language models that incorporate dual encoders and prompt-aware adapters, as well as neural networks that use contrastive learning to improve acoustic echo cancellation and multi-dialect speech recognition. These advances aim to make human-computer interaction more reliable, particularly in noisy or otherwise challenging acoustic conditions, and to improve the accuracy and robustness of speech-based technologies.
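As a rough illustration of how these pieces can fit together, the sketch below pairs a dual encoder (one branch for speech features, one for text) with a prompt-aware adapter and a contrastive (InfoNCE-style) training objective. It is a minimal, self-contained example under assumed shapes and module names (PromptAwareAdapter, DualEncoder, etc. are hypothetical), not an implementation from any particular paper listed here.

```python
# Illustrative sketch (assumed architecture, not from a specific paper):
# a dual-encoder model with a prompt-aware adapter trained with a
# contrastive objective that aligns speech and text embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptAwareAdapter(nn.Module):
    """Bottleneck adapter whose hidden units are gated by a prompt embedding."""
    def __init__(self, dim: int, prompt_dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.gate = nn.Linear(prompt_dim, bottleneck)  # prompt -> per-channel gain

    def forward(self, x: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.down(x))
        h = h * torch.sigmoid(self.gate(prompt)).unsqueeze(1)  # prompt-conditioned gating
        return x + self.up(h)  # residual connection keeps the backbone intact

class DualEncoder(nn.Module):
    """Separate encoders for speech frames and text tokens, projected to a shared space."""
    def __init__(self, speech_dim=80, text_vocab=1000, hidden=256, prompt_dim=32):
        super().__init__()
        self.speech_proj = nn.Linear(speech_dim, hidden)
        self.speech_enc = nn.GRU(hidden, hidden, batch_first=True)
        self.adapter = PromptAwareAdapter(hidden, prompt_dim)
        self.text_emb = nn.Embedding(text_vocab, hidden)
        self.text_enc = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, speech, text_ids, prompt):
        s, _ = self.speech_enc(self.speech_proj(speech))
        s = self.adapter(s, prompt).mean(dim=1)      # pooled speech embedding
        t, _ = self.text_enc(self.text_emb(text_ids))
        t = t.mean(dim=1)                            # pooled text embedding
        return F.normalize(s, dim=-1), F.normalize(t, dim=-1)

def contrastive_loss(s, t, temperature=0.07):
    """InfoNCE over a batch: matching speech/text pairs are the positives."""
    logits = s @ t.T / temperature
    targets = torch.arange(s.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors.
model = DualEncoder()
speech = torch.randn(4, 100, 80)           # 4 utterances, 100 frames of 80-dim features
text_ids = torch.randint(0, 1000, (4, 20))
prompt = torch.randn(4, 32)                # e.g. an embedding of acoustic-condition metadata
s, t = model(speech, text_ids, prompt)
loss = contrastive_loss(s, t)
```

The prompt embedding here stands in for whatever conditioning signal an adaptive system might use (speaker, dialect, or acoustic-environment metadata); the residual adapter lets that signal modulate a frozen or pretrained speech backbone without retraining it end to end.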