Music Information Retrieval
Music Information Retrieval (MIR) develops computational methods to analyze, organize, and retrieve information from music. Current research emphasizes improving automatic music transcription (using convolutional recurrent neural networks and transformers), building robust genre classification models (often by applying deep learning to specialized datasets), and creating explainable AI for tasks such as difficulty estimation. These advances benefit music education, improve music discovery and recommendation systems, and support more effective human-computer musical collaboration.
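As a concrete illustration of the genre-classification work mentioned above, the sketch below pairs a librosa log-mel spectrogram front end with a small PyTorch CNN. The file name, class count, and architecture are illustrative assumptions, not the method of any paper listed here.

```python
# A minimal sketch of a common MIR pipeline: a log-mel spectrogram front end
# feeding a small CNN genre classifier. All names and sizes are illustrative.
import librosa
import numpy as np
import torch
import torch.nn as nn

N_GENRES = 10  # assumption: e.g. the 10 classes of GTZAN; adjust for your dataset

def log_mel(path: str, sr: int = 22050, n_mels: int = 128) -> torch.Tensor:
    """Load audio and return a (1, n_mels, time) log-mel spectrogram tensor."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    return torch.from_numpy(mel_db).float().unsqueeze(0)

class GenreCNN(nn.Module):
    """Small convolutional classifier over log-mel spectrograms."""
    def __init__(self, n_classes: int = N_GENRES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # collapse remaining frequency/time axes
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

if __name__ == "__main__":
    model = GenreCNN()
    spec = log_mel("example.wav")      # hypothetical input file
    logits = model(spec.unsqueeze(0))  # add batch dimension
    print(logits.softmax(dim=-1))      # per-genre probabilities
```

In practice this skeleton would be trained with a standard cross-entropy loss over a labeled dataset; transcription and difficulty-estimation systems typically swap the classifier head for frame-level or regression outputs while keeping a similar spectrogram front end.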
Papers
On the Effectiveness of Speech Self-supervised Learning for Music
Yinghao Ma, Ruibin Yuan, Yizhi Li, Ge Zhang, Xingran Chen, Hanzhi Yin, Chenghua Lin, Emmanouil Benetos, Anton Ragni, Norbert Gyenge, Ruibo Liu, Gus Xia, Roger Dannenberg, Yike Guo, Jie Fu
Optimizing Feature Extraction for Symbolic Music
Federico Simonetta, Ana Llorens, Martín Serrano, Eduardo García-Portugués, Álvaro Torrente