Text to Music Generation
Text-to-music generation aims to create musical pieces from textual descriptions, bridging the gap between human language and musical expression. Current research relies heavily on transformer-based and diffusion models, often incorporating large language models for finer control and for longer, more structured compositions, and increasingly explores multi-track generation for richer musical arrangements. The field matters for its potential to democratize music creation: it offers new tools for composers and musicians, and the novel model architectures and datasets it produces deepen our understanding of the relationship between language and music.
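The transformer-based approach mentioned above typically works by encoding the text prompt and then autoregressively decoding a sequence of discrete audio-codec tokens conditioned on it. The following is a deliberately toy sketch of that loop: the text encoder, decoder step, and 16-token codec vocabulary are all stand-ins invented for illustration, not any real model's API (real systems use learned encoders and codec vocabularies such as EnCodec's).

```python
import hashlib

VOCAB_SIZE = 16  # toy codec vocabulary; real systems use learned audio codecs


def embed_text(prompt: str) -> int:
    # Toy stand-in for a text encoder: hash the prompt to a conditioning value.
    return int(hashlib.sha256(prompt.encode()).hexdigest(), 16) % VOCAB_SIZE


def next_token(seed: int, history: list[int]) -> int:
    # Toy stand-in for one transformer decoder step: a deterministic mix of
    # the text conditioning (seed) and the previously generated tokens.
    last = history[-1] if history else 0
    return (seed + 3 * last + len(history)) % VOCAB_SIZE


def generate(prompt: str, n_tokens: int = 8) -> list[int]:
    # Autoregressive loop: each codec token is conditioned on the text
    # embedding and on all tokens generated so far.
    seed = embed_text(prompt)
    tokens: list[int] = []
    for _ in range(n_tokens):
        tokens.append(next_token(seed, tokens))
    return tokens


if __name__ == "__main__":
    print(generate("calm piano over soft rain"))
```

In a real system the token sequence would then be passed through a neural codec decoder to produce a waveform; here the point is only the shape of the conditioning-plus-autoregression loop.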