Imagined Speech
Imagined speech research focuses on decoding the brain activity associated with silently spoken words to build non-invasive brain-computer interfaces (BCIs). Current efforts apply machine learning techniques, including deep neural networks (such as architectures incorporating ANFIS units or diffusion-based models), to high-density fNIRS or EEG recordings to improve decoding accuracy and to enable continuous, open-vocabulary imagined speech decoding. The field holds significant promise for human-computer interaction, particularly for individuals with communication impairments, and is driving advances in both AI and neuroscience.
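As a loose illustration of what a neural decoder for imagined speech can look like, the sketch below classifies fixed-length multichannel EEG windows into a small closed vocabulary using a 1-D convolutional network in PyTorch. The channel count, window length, vocabulary size, and architecture are illustrative assumptions only; they are not taken from the papers listed below, which use fNIRS and different model families.

```python
# A minimal sketch of an imagined-speech decoder: a small 1-D CNN that
# classifies fixed-length multichannel EEG windows into a closed vocabulary
# of imagined words. All shapes and hyperparameters are illustrative
# assumptions, not values from the papers listed in this section.
import torch
import torch.nn as nn

N_CHANNELS = 64   # assumed EEG channel count (high-density montages vary)
N_SAMPLES = 512   # assumed window length, e.g. 2 s at 256 Hz
N_WORDS = 5       # assumed closed vocabulary of imagined words

class ImaginedSpeechCNN(nn.Module):
    """Temporal convolutions over the EEG window, then a pooled
    feature vector fed to a linear classifier over word classes."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal filtering: learn frequency-selective kernels.
            nn.Conv1d(N_CHANNELS, 32, kernel_size=25, padding=12),
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.MaxPool1d(4),
            # Second temporal stage at a coarser time scale.
            nn.Conv1d(32, 64, kernel_size=9, padding=4),
            nn.BatchNorm1d(64),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, N_WORDS)

    def forward(self, x):  # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# Smoke test on synthetic data standing in for preprocessed EEG epochs.
model = ImaginedSpeechCNN()
eeg = torch.randn(8, N_CHANNELS, N_SAMPLES)   # 8 fake trials
labels = torch.randint(0, N_WORDS, (8,))
loss = nn.CrossEntropyLoss()(model(eeg), labels)
loss.backward()
print(f"logits shape: {tuple(model(eeg).shape)}, loss: {loss.item():.3f}")
```

A closed-vocabulary classifier like this is the simplest formulation of the problem; continuous, open-vocabulary decoding of the kind pursued in the papers below typically replaces the fixed class head with a generative language model conditioned on the neural features.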
Papers
MindGPT: Advancing Human-AI Interaction with Non-Invasive fNIRS-Based Imagined Speech Decoding
Suyi Zhang, Ekram Alam, Jack Baber, Francesca Bianco, Edward Turner, Maysam Chamanzar, Hamid Dehghani
MindSpeech: Continuous Imagined Speech Decoding using High-Density fNIRS and Prompt Tuning for Advanced Human-AI Interaction
Suyi Zhang, Ekram Alam, Jack Baber, Francesca Bianco, Edward Turner, Maysam Chamanzar, Hamid Dehghani