Cross-Modal Music
Cross-modal music research focuses on understanding and generating music by integrating multiple modalities such as audio, lyrics, sheet music, and even motion-capture data. Current efforts concentrate on building robust cross-modal retrieval systems that link audio with symbolic representations, and on generating music from textual or visual inputs, often using deep learning techniques such as contrastive learning and diffusion models. This work is significant for advancing music information retrieval, enabling new forms of music generation and analysis, and improving the interpretability of complex music understanding models.
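To make the contrastive-retrieval idea concrete, below is a minimal sketch of a symmetric InfoNCE-style objective of the kind commonly used to align embeddings from two modalities (e.g., audio excerpts and sheet-music snippets). The embedding dimensions, batch size, and temperature value are illustrative assumptions, not details from any specific paper listed here.

```python
import numpy as np

def info_nce_loss(audio_emb, sheet_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    audio_emb, sheet_emb: (batch, dim) arrays where row i of each
    modality corresponds to a matching audio/sheet-music pair.
    """
    # L2-normalize so dot products become cosine similarities.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    s = sheet_emb / np.linalg.norm(sheet_emb, axis=1, keepdims=True)
    logits = a @ s.T / temperature      # (batch, batch) similarity matrix
    idx = np.arange(len(a))             # matching pairs sit on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # Average the two retrieval directions: audio->sheet and sheet->audio.
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

# Illustrative usage: perfectly aligned pairs score a lower loss
# than mismatched (shuffled) pairs.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
aligned = info_nce_loss(emb, emb)
shuffled = info_nce_loss(emb, emb[::-1])
```

Minimizing this loss pulls each audio embedding toward its matching sheet-music embedding while pushing it away from the other snippets in the batch, which is what makes nearest-neighbor search across modalities work at retrieval time.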
Papers
- Towards Robust and Truly Large-Scale Audio-Sheet Music Retrieval (Luis Carvalho, Gerhard Widmer)
- Self-Supervised Contrastive Learning for Robust Audio-Sheet Music Retrieval Systems (Luis Carvalho, Tobias Washüttl, Gerhard Widmer)
- Passage Summarization with Recurrent Models for Audio-Sheet Music Retrieval (Luis Carvalho, Gerhard Widmer)