Motion Generation
Motion generation research focuses on synthesizing realistic, controllable movement sequences from inputs such as text, audio, or video, with the primary aims of improving the realism, efficiency, and controllability of the generated motions. Current work relies heavily on diffusion models, transformers, and variational autoencoders, often combined with latent-space manipulation, attention mechanisms, and reinforcement learning to achieve fine-grained control and handle diverse modalities. The field matters for animation, robotics, virtual reality, and autonomous driving, where it promises more immersive and interactive experiences and better human-robot collaboration.
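To make the diffusion-based approach mentioned above concrete, the sketch below shows a text-conditioned motion denoiser and one reverse (denoising) step under a standard DDPM parameterization. It is a minimal illustration, not the implementation of any paper listed here: the names (MotionDenoiser, ddpm_step), the (frames, features) motion representation, and the stand-in text embedding are all assumptions chosen for the example.

```python
# Minimal sketch of text-conditioned motion diffusion (hypothetical names and
# shapes; not any specific paper's method). A motion clip is assumed to be a
# (frames, features) tensor; the text condition is a precomputed embedding.
import torch
import torch.nn as nn

class MotionDenoiser(nn.Module):
    """Transformer that predicts the noise added to a noisy motion sequence."""
    def __init__(self, feat_dim=66, d_model=256, n_heads=4, n_layers=4, cond_dim=512):
        super().__init__()
        self.in_proj = nn.Linear(feat_dim, d_model)
        self.cond_proj = nn.Linear(cond_dim, d_model)        # text embedding -> token
        self.t_embed = nn.Embedding(1000, d_model)           # diffusion timestep token
        self.pos = nn.Parameter(torch.zeros(1, 512, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out_proj = nn.Linear(d_model, feat_dim)

    def forward(self, x_t, t, cond):
        # x_t: (B, T, feat_dim) noisy motion; t: (B,) timesteps; cond: (B, cond_dim)
        tokens = self.in_proj(x_t) + self.pos[:, : x_t.size(1)]
        prefix = torch.stack([self.cond_proj(cond), self.t_embed(t)], dim=1)
        h = self.encoder(torch.cat([prefix, tokens], dim=1))
        return self.out_proj(h[:, 2:])  # drop the two conditioning tokens

def ddpm_step(model, x_t, t, cond, betas):
    """One reverse-process step: x_t -> x_{t-1} using the predicted noise."""
    beta = betas[t].view(-1, 1, 1)
    alpha = 1.0 - beta
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(-1, 1, 1)
    eps = model(x_t, t, cond)
    mean = (x_t - beta / torch.sqrt(1.0 - alpha_bar) * eps) / torch.sqrt(alpha)
    noise = torch.randn_like(x_t) if (t > 0).all() else torch.zeros_like(x_t)
    return mean + torch.sqrt(beta) * noise

model = MotionDenoiser()
betas = torch.linspace(1e-4, 2e-2, 1000)
x = torch.randn(2, 60, 66)                    # 2 clips, 60 frames, 66 features
t = torch.full((2,), 999, dtype=torch.long)
cond = torch.randn(2, 512)                    # stand-in for a text embedding
x_prev = ddpm_step(model, x, t, cond, betas)  # one step of the reverse process
print(x_prev.shape)                           # torch.Size([2, 60, 66])
```

Prepending the condition and timestep as extra tokens is one common way to inject control signals into a transformer denoiser; cross-attention over per-word text features is a frequent alternative when finer-grained conditioning is needed.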
Papers
AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation
Yukang Cao, Liang Pan, Kai Han, Kwan-Yee K. Wong, Ziwei Liu
LaMP: Language-Motion Pretraining for Motion Generation, Retrieval, and Captioning
Zhe Li, Weihao Yuan, Yisheng He, Lingteng Qiu, Shenhao Zhu, Xiaodong Gu, Weichao Shen, Yuan Dong, Zilong Dong, Laurence T. Yang
ReinDiffuse: Crafting Physically Plausible Motions with Reinforced Diffusion Model
Gaoge Han, Mingjiang Liang, Jinglei Tang, Yongkang Cheng, Wei Liu, Shaoli Huang