Urban Sound

Urban sound research focuses on understanding and managing the complex acoustic environments of cities, with the aim of improving quality of life and informing urban planning. Current research employs machine learning, particularly deep learning models such as Convolutional Neural Networks (CNNs) and Transformers, to classify and analyze audio data, often using Mel-frequency cepstral coefficients (MFCCs) as input features. This work is driven by the need for richer representations of urban soundscapes than simple decibel measurements, combining objective sound event detection with subjective human perception of annoyance. Applications range from smart-city noise monitoring and mitigation to enhancing the environmental awareness of autonomous vehicles.
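
As a concrete illustration of the MFCC-plus-CNN pipeline mentioned above, the following is a minimal sketch (not any specific paper's method): it extracts MFCCs from an audio clip with librosa and passes them through a small PyTorch CNN to produce class logits. The class count, model architecture, and file path are illustrative assumptions; real systems typically train on a labeled dataset such as UrbanSound8K, which defines 10 sound classes.

```python
import librosa
import torch
import torch.nn as nn

NUM_CLASSES = 10  # assumption: a 10-class taxonomy, e.g. the UrbanSound8K classes


class MfccCnn(nn.Module):
    """Small CNN that classifies an MFCC 'image' of shape (n_mfcc, frames)."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size features regardless of clip length
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mfcc, frames)
        return self.classifier(self.features(x).flatten(1))


def mfcc_tensor(path: str, sr: int = 22050, n_mfcc: int = 40) -> torch.Tensor:
    """Load an audio clip and return its MFCCs as a (1, 1, n_mfcc, frames) tensor."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    mfcc = (mfcc - mfcc.mean()) / (mfcc.std() + 1e-8)  # simple per-clip normalization
    return torch.from_numpy(mfcc).float().unsqueeze(0).unsqueeze(0)


if __name__ == "__main__":
    model = MfccCnn()  # untrained; weights would come from supervised training
    x = mfcc_tensor("street_recording.wav")  # hypothetical input file
    logits = model(x)
    print("predicted class index:", logits.argmax(dim=1).item())
```

The adaptive pooling layer is one common way to handle clips of varying duration: MFCC frame counts differ per recording, but pooling to a fixed grid lets a single linear classifier head be used for all of them.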

Papers