Human Motion Generation
Human motion generation aims to create realistic and controllable human movements with computational models, typically driven by textual descriptions, audio, or other modalities. Current research relies heavily on diffusion models, often combined with techniques such as autoregressive generation, reinforcement learning, and retrieval-augmented generation, to improve motion realism, temporal coherence, and controllability, particularly for long and complex sequences. The field is significant for its applications in animation, robotics, virtual reality, and other areas requiring lifelike movement, and it also advances our understanding of human motion itself through the development of novel evaluation metrics grounded in human perception.
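To make the diffusion-based approach mentioned above concrete, the sketch below shows a minimal DDPM-style reverse (denoising) loop over a motion sequence represented as a (frames, joints, 3) array of joint positions. This is an illustrative toy, not any specific paper's method: the linear beta schedule, the function names, and the `dummy_denoiser` placeholder (which stands in for a learned, text-conditioned noise predictor) are all assumptions made for demonstration.

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    # A simple linear noise schedule (an assumption for illustration).
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def dummy_denoiser(x_t, t, text_embedding):
    # Placeholder for a learned, text-conditioned epsilon-predictor.
    # It predicts zero noise so the loop runs end to end.
    return np.zeros_like(x_t)

def sample_motion(frames=60, joints=22, T=50, seed=0):
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    # Start from pure Gaussian noise over the whole motion sequence.
    x = rng.standard_normal((frames, joints, 3))
    text_embedding = None  # would come from a text encoder in practice
    for t in reversed(range(T)):
        eps_hat = dummy_denoiser(x, t, text_embedding)
        # Standard DDPM posterior-mean update.
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:  # add noise at every step except the last
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

motion = sample_motion()
print(motion.shape)  # (60, 22, 3): frames x joints x xyz
```

In a real system the denoiser would be a transformer trained to predict the injected noise given a text embedding, and the loop would be replaced by a faster sampler; the structure of the update, however, follows the standard DDPM formulation.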
Papers
Move-in-2D: 2D-Conditioned Human Motion Generation
Hsin-Ping Huang, Yang Zhou, Jui-Hsien Wang, Difan Liu, Feng Liu, Ming-Hsuan Yang, Zhan Xu
Motion-2-to-3: Leveraging 2D Motion Data to Boost 3D Motion Generation
Huaijin Pi, Ruoxi Guo, Zehong Shen, Qing Shuai, Zechen Hu, Zhumei Wang, Yajiao Dong, Ruizhen Hu, Taku Komura, Sida Peng, Xiaowei Zhou