Music Emotion Recognition

Music emotion recognition (MER) aims to automatically identify the emotional content of music by leveraging both acoustic features (such as timbre and rhythm) and symbolic representations (such as MIDI data and lyrics). Current research emphasizes multimodal approaches that combine audio and textual information, and it explores diverse model architectures, including deep neural networks, state-space models, and feature selection techniques, to improve accuracy and mitigate biases inherent in emotion labeling. Advances in MER have implications for music information retrieval, personalized music recommendation, and fields such as affective computing and mental health, where they promise a more nuanced and objective understanding of music's emotional impact.
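To make the acoustic-feature side concrete, here is a minimal sketch (function names, thresholds, and the quadrant labels are all illustrative assumptions, not from any published MER system) that computes two simple descriptors from a raw waveform and maps them onto the coarse valence-arousal plane commonly used to organize musical emotions:

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Frequency 'center of mass' of the spectrum -- a rough timbre/brightness proxy."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

def rms_energy(signal):
    """Root-mean-square energy -- a rough intensity/arousal proxy."""
    return float(np.sqrt(np.mean(signal ** 2)))

def emotion_quadrant(signal, sr, centroid_thresh=1000.0, energy_thresh=0.1):
    """Map the two descriptors onto a coarse valence-arousal quadrant.

    Thresholds are illustrative placeholders, not tuned on any dataset;
    a real system would learn this mapping from labeled audio.
    """
    bright = spectral_centroid(signal, sr) > centroid_thresh  # crude valence proxy
    loud = rms_energy(signal) > energy_thresh                 # crude arousal proxy
    return {(True, True): "happy/excited",
            (True, False): "calm/content",
            (False, True): "angry/tense",
            (False, False): "sad/depressed"}[(bright, loud)]

# Synthetic example: a loud 2 kHz tone lands in the high-valence,
# high-arousal quadrant under these illustrative thresholds.
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 2000 * t)
print(emotion_quadrant(tone, sr))  # -> happy/excited
```

Real MER pipelines replace these hand-set thresholds with learned models over much richer feature sets (MFCCs, chroma, rhythm descriptors) or end-to-end neural representations, but the sketch shows the basic feature-to-emotion mapping idea.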

Papers