Motion Generation
Motion generation research focuses on creating realistic, controllable movement sequences from inputs such as text, audio, or video, with the primary aims of improving the realism, efficiency, and controllability of the generated motions. Current work relies heavily on diffusion models, transformers, and variational autoencoders, often combined with latent-space manipulation, attention mechanisms, and reinforcement learning to achieve fine-grained control and to handle diverse modalities. The field is significant for applications in animation, robotics, virtual reality, and autonomous driving, where it promises more immersive, interactive experiences and improved human-robot collaboration.
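To make the diffusion-based approach mentioned above concrete, the sketch below shows the generic DDPM ancestral-sampling loop applied to a motion tensor (frames x joint coordinates). This is an illustrative toy, not any paper's method: `ddpm_sample` and `denoise_fn` are hypothetical names, and the zero-noise predictor stands in for a trained network.

```python
import numpy as np

def ddpm_sample(denoise_fn, shape, betas, rng):
    """Toy DDPM ancestral sampling over a motion tensor of `shape`
    (frames x joint coordinates). `denoise_fn(x, t)` is assumed to
    predict the noise added at step t (a stand-in for a trained model)."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)          # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        eps = denoise_fn(x, t)              # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:
            # add fresh noise on all but the final step
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(shape)
        else:
            x = mean
    return x

# Hypothetical placeholder network: always predicts zero noise.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 50)
motion = ddpm_sample(lambda x, t: np.zeros_like(x), (60, 66), betas, rng)
print(motion.shape)  # 60 frames, 22 joints x 3 coordinates each
```

In a real system the denoiser would be a transformer or U-Net conditioned on text or audio embeddings, and the loop would often run in a VAE latent space rather than on raw joint positions.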
Papers
EnergyMoGen: Compositional Human Motion Generation with Energy-Based Diffusion Model in Latent Space
Jianrong Zhang, Hehe Fan, Yi Yang
ScaMo: Exploring the Scaling Law in Autoregressive Motion Generation Model
Shunlin Lu, Jingbo Wang, Zeyu Lu, Ling-Hao Chen, Wenxun Dai, Junting Dong, Zhiyang Dou, Bo Dai, Ruimao Zhang
The Language of Motion: Unifying Verbal and Non-verbal Language of 3D Human Motion
Changan Chen, Juze Zhang, Shrinidhi K. Lakshmikanth, Yusu Fang, Ruizhi Shao, Gordon Wetzstein, Li Fei-Fei, Ehsan Adeli
MulSMo: Multimodal Stylized Motion Generation by Bidirectional Control Flow
Zhe Li, Yisheng He, Lei Zhong, Weichao Shen, Qi Zuo, Lingteng Qiu, Zilong Dong, Laurence Tianruo Yang, Weihao Yuan