Text to Music
Text-to-music research aims to generate musical audio or symbolic representations from textual descriptions, enabling users to create music through natural language. Current efforts focus on improving the quality and controllability of generated music, for example by using large language models (LLMs) to enrich training datasets and to refine diffusion models, and on compressing models for wider accessibility. These advancements are significant for both music creation and the broader field of AI, offering new tools for composers and researchers while pushing the boundaries of cross-modal generation and representation learning.
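As a concrete, hedged illustration of the text-to-audio side of this work (not drawn from any specific paper listed below), the sketch generates a short clip from a natural-language prompt with the openly available MusicGen model via the transformers library; the checkpoint name, prompt, and generation length are illustrative assumptions.

```python
# Minimal text-to-music sketch using the open MusicGen checkpoint via transformers.
# Model name, prompt, and token budget are illustrative choices, not taken from any
# paper in this collection.
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Tokenize the natural-language description of the desired music.
inputs = processor(
    text=["a calm lo-fi beat with soft piano and vinyl crackle"],
    padding=True,
    return_tensors="pt",
)

# Each generated audio token covers about 20 ms, so 256 new tokens is roughly 5 seconds.
audio_values = model.generate(**inputs, do_sample=True, max_new_tokens=256)

# Write the waveform at the codec's native sampling rate (32 kHz for MusicGen).
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```

The same prompt-in, waveform-out pattern underlies most text-to-music systems, whether the backbone is an autoregressive token model like MusicGen or a diffusion model over audio latents.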
Papers
Eleven papers, dated between January 26, 2023 and October 2, 2024.