Motion Synthesis
Motion synthesis aims to generate realistic human and animal movements from inputs such as text, audio, or sparse sensor data, primarily to create lifelike animations and interactive experiences. Current research relies heavily on diffusion models and transformers, often combined with autoregressive generation, attention mechanisms, and multi-modal conditioning to improve motion coherence, detail, and controllability. The field is significant for its applications in animation, gaming, virtual reality, and robotics, and for its potential to advance our understanding of human and animal movement through large-scale synthetic datasets.
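To make the dominant recipe concrete, the sketch below shows a minimal text-conditioned motion diffusion sampler: a transformer denoiser attends over noisy pose frames plus condition tokens, and DDPM-style ancestral sampling iteratively refines the sequence. Everything here is an illustrative assumption, not the method of any paper listed below: the names (MotionDenoiser, sample), the 66-dimensional pose vector (22 joints times 3), and the random stand-in for a text embedding.

```python
# Minimal sketch of conditional motion diffusion (illustrative, untrained).
import torch
import torch.nn as nn

class MotionDenoiser(nn.Module):
    """Transformer that predicts the clean motion x0 from a noisy sequence."""
    def __init__(self, pose_dim=66, cond_dim=512, d_model=256, n_layers=4):
        super().__init__()
        self.in_proj = nn.Linear(pose_dim, d_model)
        self.cond_proj = nn.Linear(cond_dim, d_model)   # text embedding -> token
        self.time_emb = nn.Embedding(1000, d_model)     # diffusion-step embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out_proj = nn.Linear(d_model, pose_dim)

    def forward(self, x_t, t, cond):
        h = self.in_proj(x_t)                            # (B, T, d_model)
        # Multi-modal conditioning via two prefix tokens: condition + timestep.
        prefix = torch.stack([self.cond_proj(cond), self.time_emb(t)], dim=1)
        h = self.encoder(torch.cat([prefix, h], dim=1))
        return self.out_proj(h[:, 2:])                   # drop the prefix tokens

@torch.no_grad()
def sample(model, cond, n_frames=60, pose_dim=66, n_steps=1000):
    """DDPM-style ancestral sampling over a whole motion sequence."""
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(cond.shape[0], n_frames, pose_dim)   # start from pure noise
    for step in reversed(range(n_steps)):
        t = torch.full((cond.shape[0],), step, dtype=torch.long)
        x0_hat = model(x, t, cond)                       # predict clean motion
        if step == 0:
            x = x0_hat
        else:
            ab_t, ab_prev = alpha_bar[step], alpha_bar[step - 1]
            # Posterior mean of q(x_{t-1} | x_t, x0).
            coef0 = betas[step] * ab_prev.sqrt() / (1 - ab_t)
            coeft = (1 - ab_prev) * alphas[step].sqrt() / (1 - ab_t)
            mean = coef0 * x0_hat + coeft * x
            var = betas[step] * (1 - ab_prev) / (1 - ab_t)
            x = mean + var.sqrt() * torch.randn_like(x)
    return x

model = MotionDenoiser()
text_emb = torch.randn(1, 512)   # stand-in for a CLIP-style text embedding
motion = sample(model, text_emb, n_frames=30, n_steps=50)
print(motion.shape)              # torch.Size([1, 30, 66]) -- random, model is untrained
```

Conditioning through prefix tokens is one common wiring choice: the same attention mechanism can then consume text, audio, or sparse-sensor embeddings interchangeably, which is how multi-modal conditioning is typically attached to these denoisers.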
Papers
MotionLLaMA: A Unified Framework for Motion Synthesis and Comprehension
Zeyu Ling, Bo Han, Shiyang Li, Hongdeng Shen, Jikang Cheng, Changqing Zou
I2VControl: Disentangled and Unified Video Motion Synthesis Control
Wanquan Feng, Tianhao Qi, Jiawei Liu, Mingzhen Sun, Pengqi Tu, Tianxiang Ma, Fei Dai, Songtao Zhao, Siyu Zhou, Qian He