Paper ID: 2207.12121
Cross-Modal Contrastive Representation Learning for Audio-to-Image Generation
HaeChun Chung, JooYong Shim, Jong-Kook Kim
Multiple modalities of the same information provide a variety of perspectives on that information, which can improve its understanding. Thus, generating data of a different modality from existing data may be crucial to enhancing that understanding. In this paper, we investigate the cross-modal audio-to-image generation problem and propose Cross-Modal Contrastive Representation Learning (CMCRL) to extract useful features from audio and use them in the generation phase. Experimental results show that CMCRL improves the quality of generated images over previous approaches.
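The abstract does not specify the contrastive objective beyond its name; a common formulation for cross-modal contrastive learning is a symmetric InfoNCE loss over paired audio and image embeddings. A minimal NumPy sketch under that assumption (function names and the temperature value are illustrative, not from the paper):

```python
import numpy as np

def cross_modal_info_nce(audio_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matched audio/image pairs sit on the
    diagonal of the similarity matrix and act as positives; all other
    pairs in the batch are negatives. (Illustrative, not the paper's
    exact objective.)"""
    # L2-normalize so the dot product is cosine similarity
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = (a @ v.T) / temperature          # (batch, batch) similarities
    labels = np.arange(len(a))                # positive pair index per row

    def cross_entropy(l):
        # numerically stable log-softmax, then pick the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the audio-to-image and image-to-audio directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

Minimizing this loss pulls each audio embedding toward its paired image embedding and pushes it away from the other images in the batch, which is the sense in which the learned audio features become "useful" for the image generation phase.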
Submitted: Jul 20, 2022