Motion Generation
Motion generation research focuses on creating realistic and controllable movement sequences from inputs such as text, audio, or video, with the primary aims of improving the realism, efficiency, and controllability of the generated motions. Current work relies heavily on diffusion models, transformers, and variational autoencoders, often combined with latent-space manipulation, attention mechanisms, and reinforcement learning to achieve fine-grained control and to handle diverse modalities. The field matters for animation, robotics, virtual reality, and autonomous driving, where it promises more immersive, interactive experiences and better human-robot collaboration.
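To make the dominant diffusion-based recipe concrete, here is a minimal sketch of a DDPM-style denoising loop for text-conditioned motion generation. Everything here is an illustrative assumption rather than any particular paper's design: the `MotionDenoiser` MLP, the frame/joint dimensions, and the linear beta schedule are toy stand-ins.

```python
import torch
import torch.nn as nn

class MotionDenoiser(nn.Module):
    """Toy noise predictor: (noisy motion, timestep, text embedding) -> noise estimate."""
    def __init__(self, n_frames=60, n_joints=22, text_dim=512, hidden=256):
        super().__init__()
        self.motion_dim = n_frames * n_joints * 3  # xyz per joint per frame
        self.net = nn.Sequential(
            nn.Linear(self.motion_dim + 1 + text_dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, self.motion_dim),
        )

    def forward(self, x_t, t, text_emb):
        h = torch.cat([x_t.flatten(1), t[:, None].float(), text_emb], dim=1)
        return self.net(h).view_as(x_t)

@torch.no_grad()
def sample_motion(model, text_emb, n_frames=60, n_joints=22, steps=50):
    """DDPM-style ancestral sampling with a linear beta schedule (illustrative values)."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x_t = torch.randn(text_emb.shape[0], n_frames, n_joints, 3)  # start from pure noise
    for t in reversed(range(steps)):
        eps = model(x_t, torch.full((x_t.shape[0],), t), text_emb)
        # Posterior mean of x_{t-1} given the predicted noise.
        mean = (x_t - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        x_t = mean + (torch.sqrt(betas[t]) * torch.randn_like(x_t) if t > 0 else 0.0)
    return x_t  # (batch, frames, joints, xyz)

model = MotionDenoiser()
text_emb = torch.randn(1, 512)  # stand-in for a CLIP-style text embedding
print(sample_motion(model, text_emb).shape)  # torch.Size([1, 60, 22, 3])
```

In a real system the denoiser would be a trained transformer and the text embedding would come from a pretrained encoder; the loop above only shows where the conditioning enters and how noise is removed step by step.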
Papers
Shared Autonomy via Variable Impedance Control and Virtual Potential Fields for Encoding Human Demonstration
Shail Jadav, Johannes Heidersberger, Christian Ott, Dongheui Lee
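This title combines two standard robot-control ingredients: an impedance law with state-dependent (variable) gains, and a virtual potential field whose negative gradient acts as an attractive force. The 1-DoF simulation below is a hypothetical sketch of how they compose; the stiffness schedule, potential shape, and gains are made up for illustration and are not the paper's controller.

```python
import numpy as np

GOAL = 1.0  # attractor encoding the demonstrated target position

def potential_force(x, k_field=4.0):
    """Virtual potential U(x) = 0.5 * k_field * (x - GOAL)^2; force is F = -dU/dx."""
    return -k_field * (x - GOAL)

def simulate(steps=2000, dt=0.002, m=1.0):
    """1-DoF variable impedance loop: m*x'' = -d*x' - k(x)*(x - GOAL) + F_pot.
    Stiffness drops near the goal so a human partner can easily override the robot there."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        k = 20.0 * min(abs(x - GOAL), 1.0) + 2.0  # variable stiffness (illustrative schedule)
        d = 2.0 * np.sqrt(k * m)                  # critical damping for the current stiffness
        f = potential_force(x) - d * v - k * (x - GOAL)
        v += (f / m) * dt
        x += v * dt
    return x

print(f"final position: {simulate():.3f}")  # settles near the goal at x = 1.0
```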
Understanding and Improving Training-free Loss-based Diffusion Guidance
Yifei Shen, Xinyang Jiang, Yezhen Wang, Yifan Yang, Dongqi Han, Dongsheng Li
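This title refers to the generic training-free guidance mechanism for diffusion models: during sampling, estimate the clean sample x̂₀ from the current noisy sample (Tweedie's formula), evaluate a differentiable task loss on x̂₀, and nudge the noisy sample down the loss gradient, with no model finetuning. The sketch below shows that mechanism only, not this paper's specific analysis or improvements; the noise predictor and loss are toy stand-ins.

```python
import torch

def guided_step(x_t, t, eps_model, loss_fn, alpha_bar_t, scale=1.0):
    """One training-free guidance update: estimate x0, backprop a task loss
    through the denoiser, and shift x_t against the loss gradient."""
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)
    # Tweedie estimate of the clean sample from the noise prediction.
    x0_hat = (x_t - torch.sqrt(1 - alpha_bar_t) * eps) / torch.sqrt(alpha_bar_t)
    grad, = torch.autograd.grad(loss_fn(x0_hat), x_t)
    return (x_t - scale * grad).detach()  # steered sample for the next solver step

# Toy stand-ins (assumptions, not any trained model):
eps_model = lambda x, t: 0.1 * x                 # pretend noise predictor
loss_fn = lambda x0: ((x0 - 2.0) ** 2).mean()    # push samples toward the value 2.0
x_t = torch.randn(4, 8)
x_t = guided_step(x_t, t=10, eps_model=eps_model, loss_fn=loss_fn,
                  alpha_bar_t=torch.tensor(0.5), scale=0.5)
print(x_t.shape)  # torch.Size([4, 8])
```

In practice this update is interleaved with the ordinary denoising steps of the sampler, so the loss steers generation while the pretrained model keeps samples on the data manifold.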