Mutual Intelligibility
Mutual intelligibility, the ability of speakers of different but related languages to understand one another without prior study, is a complex phenomenon currently investigated through both psycholinguistic experiments and computational modeling. Research focuses on disentangling the contributions of segmental phonetic features (individual sounds) and prosodic features (such as intonation and rhythm) to comprehension, often employing machine learning models such as LSTMs with attention mechanisms to analyze speech signals and predict intelligibility levels. These studies advance our understanding of language processing and have practical applications in speech technology, language assessment, and assistive technologies for individuals with speech impairments.
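The attention-LSTM approach mentioned above can be illustrated with a minimal NumPy sketch: an LSTM consumes a sequence of frame-level acoustic features (e.g. log-mel or modulation-spectrogram frames), an attention layer assigns a weight to each frame's hidden state, and the weighted sum is fed to a softmax classifier over intelligibility levels. All dimensions, parameter names, and the random inputs here are illustrative assumptions, not the architecture of any specific paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell; one weight matrix for the four gates."""
    def __init__(self, input_dim, hidden_dim, rng):
        scale = 1.0 / np.sqrt(input_dim + hidden_dim)
        self.W = rng.uniform(-scale, scale, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_dim
        i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])      # input and forget gates
        g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])  # candidate and output gate
        c_new = f * c + i * g
        return o * np.tanh(c_new), c_new

def attention_pool(states, w):
    """Score each frame's hidden state, softmax to weights, return weighted sum."""
    scores = states @ w
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ states, weights

def classify_utterance(frames, cell, attn_w, out_W, out_b):
    """Run the LSTM over all frames, pool with attention, classify."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    states = []
    for x in frames:
        h, c = cell.step(x, h, c)
        states.append(h)
    context, weights = attention_pool(np.stack(states), attn_w)
    logits = out_W @ context + out_b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs, weights

# Illustrative sizes: 50 frames, 12-dim features, 16 hidden units, 3 levels.
rng = np.random.default_rng(0)
T, F, H, n_classes = 50, 12, 16, 3
frames = rng.normal(size=(T, F))  # stand-in for real spectrogram frames
cell = LSTMCell(F, H, rng)
attn_w = rng.normal(size=H)
out_W, out_b = rng.normal(size=(n_classes, H)), np.zeros(n_classes)
probs, weights = classify_utterance(frames, cell, attn_w, out_W, out_b)
```

A useful side effect of the attention weights is interpretability: inspecting `weights` shows which frames the (untrained, in this sketch) model would treat as most informative for the intelligibility decision.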
Papers
A Computational Model for the Assessment of Mutual Intelligibility Among Closely Related Languages
Jessica Nieder, Johann-Mattis List
On combining acoustic and modulation spectrograms in an attention LSTM-based system for speech intelligibility level classification
Ascensión Gallardo-Antolín, Juan M. Montero
An Attention Long Short-Term Memory based system for automatic classification of speech intelligibility
Miguel Fernández-Díaz, Ascensión Gallardo-Antolín