Motion Generation
Motion generation research focuses on synthesizing movement sequences from inputs such as text, audio, or video, with the primary goals of improving the realism, efficiency, and controllability of the generated motions. Current work relies heavily on diffusion models, transformers, and variational autoencoders, often combined with latent-space manipulation, attention mechanisms, and reinforcement learning to achieve fine-grained control and handle diverse modalities. The field matters for animation, robotics, virtual reality, and autonomous driving, where it promises more immersive and interactive experiences and better human-robot collaboration.
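To make the diffusion-based approach concrete, below is a minimal sketch (assuming PyTorch) of a text-conditioned motion diffusion pipeline: a toy transformer denoiser plus a DDPM-style sampling loop. The `MotionDenoiser` class, the pose dimension, and the noise schedule are illustrative placeholders, not the architecture or API of any paper listed on this page.

```python
# Sketch: text-conditioned motion generation via a diffusion model (illustrative only).
import torch
import torch.nn as nn


class MotionDenoiser(nn.Module):
    """Toy transformer that predicts the noise added to a motion sequence."""

    def __init__(self, pose_dim=66, text_dim=512, hidden=256, layers=4):
        super().__init__()
        self.in_proj = nn.Linear(pose_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        encoder_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.out_proj = nn.Linear(hidden, pose_dim)

    def forward(self, noisy_motion, t, text_emb):
        # noisy_motion: (batch, frames, pose_dim); t: (batch,); text_emb: (batch, text_dim)
        h = self.in_proj(noisy_motion)
        h = h + self.time_embed(t.view(-1, 1, 1).float())                 # broadcast timestep over frames
        h = torch.cat([self.text_proj(text_emb).unsqueeze(1), h], dim=1)  # prepend text as a token
        h = self.encoder(h)[:, 1:]                                        # drop the text token
        return self.out_proj(h)                                           # predicted noise


@torch.no_grad()
def sample_motion(model, text_emb, frames=60, pose_dim=66, steps=50):
    """DDPM-style ancestral sampling: start from Gaussian noise, denoise step by step."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(text_emb.size(0), frames, pose_dim)                   # pure noise
    for t in reversed(range(steps)):
        t_batch = torch.full((x.size(0),), t)
        eps = model(x, t_batch, text_emb)                                 # predict the added noise
        # Posterior mean of x_{t-1} given the predicted noise.
        x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)            # add sampling noise
    return x                                                              # (batch, frames, pose_dim)


if __name__ == "__main__":
    model = MotionDenoiser()
    fake_text_emb = torch.randn(1, 512)        # stand-in for a CLIP/T5-style text embedding
    motion = sample_motion(model, fake_text_emb)
    print(motion.shape)                        # torch.Size([1, 60, 66])
```

In practice, the text embedding comes from a pretrained language or vision-language encoder, and fine-grained control (e.g., over individual body parts or trajectories) is typically added through extra conditioning signals or guidance terms during sampling.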
Papers
Shared Autonomy via Variable Impedance Control and Virtual Potential Fields for Encoding Human Demonstration
Shail Jadav, Johannes Heidersberger, Christian Ott, Dongheui Lee
Understanding and Improving Training-free Loss-based Diffusion Guidance
Yifei Shen, Xinyang Jiang, Yezhen Wang, Yifan Yang, Dongqi Han, Dongsheng Li
LivePhoto: Real Image Animation with Text-guided Motion Control
Xi Chen, Zhiheng Liu, Mengting Chen, Yutong Feng, Yu Liu, Yujun Shen, Hengshuang Zhao
FG-MDM: Towards Zero-Shot Human Motion Generation via ChatGPT-Refined Descriptions
Xu Shi, Wei Yao, Chuanchen Luo, Junran Peng, Hongwen Zhang, Yunlian Sun