Accompaniment Generation

Accompaniment generation focuses on automatically creating instrumental music to complement a given audio or symbolic musical input, such as vocals or a melody. Current research emphasizes real-time performance and high-quality output across diverse architectures, including transformers, diffusion models, and variational autoencoders, and often incorporates techniques such as contrastive learning and beat tracking to improve coherence and stylistic consistency. The field is significant for its potential to streamline music production workflows, enabling musicians to create richer compositions more efficiently and fostering new forms of human-computer creative collaboration.
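To make the core task concrete, the sketch below shows symbolic accompaniment in its simplest rule-based form: each bar of an input melody is harmonized with one block triad. The harmonization rule, function names, and example melody are all illustrative assumptions for this sketch, not the method of any particular system or paper.

```python
# Toy rule-based accompaniment sketch: harmonize a symbolic melody with
# block triads. All names and the harmonization rule are illustrative
# assumptions, not taken from any specific paper or system.

from collections import Counter

MAJOR_TRIAD = (0, 4, 7)  # root, major third, perfect fifth (in semitones)

def harmonize_bar(melody_notes):
    """Pick one triad for a bar: root the chord on the bar's most
    frequent pitch class (a crude stand-in for harmonic analysis)."""
    pitch_classes = [n % 12 for n in melody_notes]
    root = Counter(pitch_classes).most_common(1)[0][0]
    # Voice the triad roughly an octave below the bar's lowest note.
    bass_octave = (min(melody_notes) // 12 - 1) * 12
    return [bass_octave + root + interval for interval in MAJOR_TRIAD]

def generate_accompaniment(melody_bars):
    """Map each bar of melody (a list of MIDI note numbers) to a chord."""
    return [harmonize_bar(bar) for bar in melody_bars]

# Example: two bars of a simple melody given as MIDI note numbers.
melody = [[60, 64, 67, 64], [65, 69, 65, 72]]
print(generate_accompaniment(melody))  # one triad per input bar
```

Learned systems replace the hand-written `harmonize_bar` rule with a model (e.g. a transformer or diffusion model) trained to predict accompaniment tokens or audio conditioned on the input, but the overall input-to-accompaniment mapping has the same shape.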

Papers