Motion Diffusion Model
Motion diffusion models are generative AI models designed to create realistic and diverse human motion sequences, often conditioned on textual descriptions, music, or other inputs. Current research focuses on improving controllability, temporal consistency, and the handling of complex interactions (e.g., multi-person motions) using architectures like diffusion U-Nets and transformers, often incorporating techniques like latent space diffusion and physics-based constraints. These advancements are significant for applications in computer animation, robotics, and virtual reality, offering more efficient and expressive methods for generating human-like movement.
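To make the diffusion mechanics above concrete, here is a minimal numpy sketch of the DDPM-style forward (noising) and reverse (denoising) updates that underlie such motion models. The noise schedule, the frame/joint shapes, and the use of the true noise as a stand-in for a learned denoiser's prediction are all illustrative assumptions, not any specific paper's implementation.

```python
import numpy as np

# Toy sketch of DDPM-style diffusion over a motion sequence.
# The schedule, shapes, and "denoiser" below are illustrative placeholders.

T_STEPS = 100                                  # assumed number of diffusion timesteps
betas = np.linspace(1e-4, 0.02, T_STEPS)       # linear noise schedule (an assumption)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, eps):
    """Forward process: noise a clean motion x0 to timestep t."""
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

def p_step(xt, t, eps_hat, rng):
    """One reverse (denoising) step, given a noise prediction eps_hat.

    In a real model eps_hat would come from a network (e.g. a transformer)
    conditioned on text or music; here the caller supplies it directly.
    """
    a, ab = alphas[t], alpha_bars[t]
    mean = (xt - betas[t] / np.sqrt(1.0 - ab) * eps_hat) / np.sqrt(a)
    if t == 0:
        return mean
    return mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)

rng = np.random.default_rng(0)
frames, joints = 60, 22                        # hypothetical: 60 frames, 22 joints (x, y, z)
x0 = rng.standard_normal((frames, joints * 3)) # stand-in for a "clean" motion sequence
eps = rng.standard_normal(x0.shape)
xt = q_sample(x0, 50, eps)                     # noised motion at t = 50
x_prev = p_step(xt, 50, eps, rng)              # one reverse step toward t = 49
```

Sampling a full motion would iterate `p_step` from `t = T_STEPS - 1` down to `0`, starting from pure Gaussian noise; latent-space variants run the same loop on a compressed representation of the sequence rather than raw joint coordinates.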