Ultrasonic Vocalization
Ultrasonic vocalization research focuses on understanding the acoustic properties and communicative functions of high-frequency sounds produced by various animals, including humans and other mammals. Current work employs machine learning techniques, such as deep neural networks (including convolutional and recurrent architectures) and Bayesian models, to automatically classify and analyze these vocalizations, typically operating on spectrographic representations or Mel-frequency cepstral coefficients (MFCCs). This research has implications for animal behavior studies, healthcare (e.g., detecting disease through vocal biomarkers), and human-computer interaction (e.g., improving human-robot communication). A key focus is the development of robust and efficient automated analysis methods that enable large-scale studies and real-world applications.
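To make the typical pipeline concrete, the sketch below extracts a log-Mel spectrogram and MFCCs from a synthetic stand-in for an ultrasonic call and passes the spectrogram through a small convolutional classifier. The library choices (librosa, PyTorch), the 250 kHz sample rate, and all architecture hyperparameters are illustrative assumptions, not details taken from the papers listed below.

```python
# Minimal sketch of a spectrogram/MFCC + CNN pipeline for vocalization
# classification. All hyperparameters here are placeholder assumptions.
import numpy as np
import librosa
import torch
import torch.nn as nn

SR = 250_000  # high sample rate to capture ultrasonic content (assumption)

# Synthetic stand-in for a recorded call: a 50 -> 70 kHz frequency sweep.
t = np.linspace(0, 0.05, int(SR * 0.05), endpoint=False)
call = np.sin(2 * np.pi * (50_000 + 200_000 * t) * t).astype(np.float32)

# Spectrographic representation: log-scaled Mel spectrogram.
mel = librosa.feature.melspectrogram(
    y=call, sr=SR, n_fft=512, hop_length=128, n_mels=64
)
log_mel = librosa.power_to_db(mel, ref=np.max)

# MFCCs computed from the same signal.
mfcc = librosa.feature.mfcc(y=call, sr=SR, n_mfcc=13, n_fft=512, hop_length=128)

class CallClassifier(nn.Module):
    """Small CNN over a spectrogram; the architecture is a placeholder."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Shape (batch, channel, mels, frames) -> class logits.
x = torch.from_numpy(log_mel)[None, None]
logits = CallClassifier()(x)
print(log_mel.shape, mfcc.shape, logits.shape)
```

In practice, recordings would replace the synthetic sweep, and the classifier would be trained on labeled calls; the adaptive pooling step is one common way to handle vocalizations of varying duration, since it maps spectrograms with different frame counts to a fixed-size feature vector.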
Papers
Sustained Vowels for Pre- vs Post-Treatment COPD Classification
Andreas Triantafyllopoulos, Anton Batliner, Wolfgang Mayr, Markus Fendler, Florian Pokorny, Maurice Gerczuk, Shahin Amiriparian, Thomas Berghaus, Björn Schuller
An automatic analysis of ultrasound vocalisations for the prediction of interaction context in captive Egyptian fruit bats
Andreas Triantafyllopoulos, Alexander Gebhard, Manuel Milling, Simon Rampp, Björn Schuller