Acoustic Cue
Acoustic cues are the non-verbal information conveyed through sound in speech and vocalizations, and they are increasingly studied for what they can reveal beyond the literal meaning of words. Current research focuses on analyzing these cues with machine learning models, including deep learning architectures such as convolutional and recurrent neural networks, to predict attributes such as age, gender, and emotion, and even to detect neurodevelopmental conditions such as autism. This work has significant implications for diverse fields, ranging from improving human-computer interaction and assistive technologies to deepening our understanding of human communication and social dynamics.
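As a minimal sketch of the pipeline described above (not any specific paper's method), the snippet below extracts a log-mel spectrogram from audio with librosa and feeds it to a small convolutional network that predicts a paralinguistic attribute such as emotion. The label set, architecture, and hyperparameters are illustrative assumptions, and the synthetic tone stands in for recorded speech.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

SR = 16000
EMOTIONS = ["neutral", "happy", "sad", "angry"]  # assumed label set

def log_mel(y: np.ndarray, sr: int = SR, n_mels: int = 64) -> torch.Tensor:
    """Convert a waveform to a (1, n_mels, time) log-mel spectrogram tensor."""
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    S_db = librosa.power_to_db(S, ref=np.max)
    return torch.from_numpy(S_db).float().unsqueeze(0)

class CueCNN(nn.Module):
    """Small CNN over spectrograms; global pooling handles variable-length input."""
    def __init__(self, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse freq/time into a fixed-size vector
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    # Stand-in waveform (1 s of a 440 Hz tone); a real pipeline would load
    # recorded speech, e.g. librosa.load("speech.wav", sr=SR).
    t = np.linspace(0, 1, SR, endpoint=False)
    y = 0.1 * np.sin(2 * np.pi * 440 * t).astype(np.float32)
    x = log_mel(y).unsqueeze(0)            # shape: (batch=1, 1, n_mels, frames)
    logits = CueCNN()(x)                   # untrained; outputs are arbitrary
    print("predicted:", EMOTIONS[int(logits.argmax(dim=1))])
```

In practice the model would be trained on labeled speech corpora; a recurrent network over frame-level features is a common alternative to the CNN shown here when temporal dynamics matter.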