Music-Dance Generation
Music-dance generation research focuses on creating realistic and expressive dance sequences synchronized with music. Current efforts leverage advanced deep learning models, particularly diffusion models, often employing a two-stage approach (coarse-to-fine or bidirectional) to capture both global musical structure and fine-grained motion details. These models are trained on increasingly large datasets and evaluated using metrics that assess not only motion quality and rhythm but also the stylistic consistency between the generated dance and the input music. This research contributes to both the understanding of human movement and the development of novel creative tools for artists and animators.
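The two-stage, coarse-to-fine pipeline described above can be sketched in miniature. The following toy example (not taken from any specific paper; all function names and the simple interpolation-plus-smoothing refinement are hypothetical stand-ins for what a learned diffusion model would do) illustrates the structure: a coarse stage places one pose vector per musical beat, and a fine stage densifies and refines them into a smooth motion sequence.

```python
# Illustrative sketch of a two-stage, coarse-to-fine music-to-dance
# pipeline. Stage 1 places coarse pose "keyframes" on musical beats;
# stage 2 refines them into a dense motion sequence. The random poses
# and neighbor-averaging "refinement" are toy placeholders for the
# outputs of trained generative models.

import random

def coarse_stage(beat_times, pose_dim=4, seed=0):
    """Stage 1: produce one coarse pose vector per musical beat."""
    rng = random.Random(seed)
    return [[rng.uniform(-1, 1) for _ in range(pose_dim)]
            for _ in beat_times]

def fine_stage(keyframes, frames_per_beat=8, smooth_steps=3):
    """Stage 2: densify keyframes by interpolation, then iteratively
    smooth (a stand-in for fine-grained, learned refinement)."""
    dense = []
    for a, b in zip(keyframes, keyframes[1:]):
        for t in range(frames_per_beat):
            w = t / frames_per_beat
            dense.append([(1 - w) * x + w * y for x, y in zip(a, b)])
    dense.append(list(keyframes[-1]))
    for _ in range(smooth_steps):  # simple neighbor averaging
        dense = [dense[0]] + [
            [(p + c + n) / 3
             for p, c, n in zip(dense[i - 1], dense[i], dense[i + 1])]
            for i in range(1, len(dense) - 1)
        ] + [dense[-1]]
    return dense

beats = [0.0, 0.5, 1.0, 1.5]  # hypothetical beat timestamps (seconds)
motion = fine_stage(coarse_stage(beats))
print(len(motion), len(motion[0]))  # 25 dense frames, 4-dim poses
```

In a real system, the coarse stage would be a generative model conditioned on global music features, and the fine stage a diffusion model conditioned on both the coarse keyframes and local audio features, which is what lets such pipelines capture global structure and fine-grained detail simultaneously.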