Motion Generation
Motion generation research focuses on creating realistic, controllable movement sequences from inputs such as text, audio, or video, with the primary aims of improving the realism, efficiency, and controllability of generated motions. Current work relies heavily on diffusion models, transformers, and variational autoencoders, often combined with latent-space manipulation, attention mechanisms, and reinforcement learning to achieve fine-grained control and to handle diverse input modalities. The field matters for animation, robotics, virtual reality, and autonomous driving, where it promises more immersive and interactive experiences and better human-robot collaboration.
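To make the diffusion-model approach mentioned above concrete, here is a minimal numpy sketch of the standard DDPM forward (noising) and reverse (denoising) steps applied to a motion tensor. All names, shapes (60 frames x 22 joints x 3 coordinates), and the linear noise schedule are illustrative assumptions, not taken from any of the papers listed below; in practice the noise predictor would be a trained transformer or U-Net conditioned on, e.g., a text embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

T_STEPS = 100                                # number of diffusion timesteps (assumed)
betas = np.linspace(1e-4, 0.02, T_STEPS)     # linear noise schedule (a common choice)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)               # cumulative product, the "alpha-bar" term

def q_sample(x0, t, noise):
    """Forward process: produce a noised motion x_t from clean motion x0 at step t."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

def p_step(x_t, t, eps_hat):
    """One DDPM reverse step, given a noise prediction eps_hat.
    eps_hat stands in for the output of a trained denoising network."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:
        # Add the stochastic term for intermediate steps (sigma_t = sqrt(beta_t)).
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

# Toy "clean" motion sequence: frames x joints x xyz.
x0 = rng.standard_normal((60, 22, 3))
noise = rng.standard_normal(x0.shape)
x_t = q_sample(x0, t=50, noise=noise)
# With a perfect noise prediction, one reverse step moves x_t back toward x0.
x_prev = p_step(x_t, t=50, eps_hat=noise)
```

A full sampler would iterate p_step from t = T_STEPS - 1 down to 0 starting from pure Gaussian noise; conditioning signals (text, audio) enter through the network that produces eps_hat.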
Papers
VaPr: Variable-Precision Tensors to Accelerate Robot Motion Planning
Yu-Shun Hsiao, Siva Kumar Sastry Hari, Balakumar Sundaralingam, Jason Yik, Thierry Tambe, Charbel Sakr, Stephen W. Keckler, Vijay Janapa Reddi
CoPAL: Corrective Planning of Robot Actions with Large Language Models
Frank Joublin, Antonello Ceravola, Pavel Smirnov, Felix Ocker, Joerg Deigmoeller, Anna Belardinelli, Chao Wang, Stephan Hasler, Daniel Tanneberg, Michael Gienger