Ultrasonic Vocalization

Ultrasonic vocalization research focuses on understanding the acoustic properties and communicative functions of high-frequency sounds (typically above 20 kHz, beyond the range of human hearing) produced by various animals, most notably rodents and other mammals. Current research applies machine learning techniques, such as deep neural networks (including convolutional and recurrent architectures) and Bayesian models, to automatically classify and analyze these vocalizations, most often from spectrographic representations or Mel-frequency cepstral coefficients (MFCCs). This work has implications for diverse fields, including animal behavior studies, healthcare (e.g., detecting disease through vocal biomarkers), and human-computer interaction (e.g., improving human-robot communication). A key focus is the development of robust and efficient automated analysis methods, which enable large-scale studies and broader application of these techniques.
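
As a minimal illustration of the spectrogram/MFCC-plus-CNN pipeline described above, the sketch below extracts MFCCs from a recording and passes them to a small convolutional classifier. The library choices (librosa, PyTorch), the assumed 250 kHz sampling rate, the two-class output, and the `USVClassifier` name are illustrative assumptions, not the method of any particular paper.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def extract_features(wav_path, sr=250_000, n_mfcc=20):
    """Load a recording and return MFCCs; rodent USVs typically require
    sampling rates of 200 kHz or more (the value here is an assumption)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.astype(np.float32)

class USVClassifier(nn.Module):
    """Small CNN over a (batch, 1, n_mfcc, time) input; hypothetical architecture."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse time/frequency to one vector per clip
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

# Usage with a hypothetical file: logits over call types for one clip.
# feats = torch.from_numpy(extract_features("usv_clip.wav"))
# logits = USVClassifier()(feats.unsqueeze(0).unsqueeze(0))  # shape (1, 2)
```

The adaptive pooling layer makes the classifier independent of clip duration, which matters because detected vocalizations vary widely in length; a recurrent architecture over the MFCC frames would be an alternative way to handle variable-length inputs.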

Papers