Diverse Motion
Diverse motion generation focuses on producing realistic and varied human or robotic movements from inputs such as text, audio, or task specifications. Current research emphasizes generative models, including diffusion models and approaches that incorporate reinforcement learning or adversarial training, to achieve both high motion quality and output diversity, and it often addresses the limitations of existing datasets by building larger, more varied training sets. This work matters for robotics, animation, and virtual reality, where it enables more natural and expressive motion and interaction.
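To make the summary above concrete, the sketch below shows a minimal text-conditioned motion diffusion sampler in the standard DDPM style. It is a sketch under stated assumptions, not the method of any listed paper: the MotionDenoiser network, the 66-dimensional pose representation, and all hyperparameters are illustrative.

# Minimal sketch of text-conditioned motion diffusion sampling.
# All names, dimensions, and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class MotionDenoiser(nn.Module):
    """Toy denoiser: predicts the noise added to a motion sequence,
    conditioned on a text embedding and the diffusion timestep."""
    def __init__(self, motion_dim=66, cond_dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(motion_dim + cond_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, x_t, t, cond):
        # x_t: (batch, frames, motion_dim); t: (batch,); cond: (batch, cond_dim)
        B, F, D = x_t.shape
        t_feat = t.float().view(B, 1, 1).expand(B, F, 1) / 1000.0
        c_feat = cond.unsqueeze(1).expand(B, F, cond.size(-1))
        return self.net(torch.cat([x_t, c_feat, t_feat], dim=-1))

@torch.no_grad()
def sample_motion(model, cond, frames=60, motion_dim=66, steps=1000):
    """DDPM-style reverse process: start from Gaussian noise and
    iteratively denoise; different noise seeds yield diverse motions."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(cond.size(0), frames, motion_dim)
    for t in reversed(range(steps)):
        t_batch = torch.full((cond.size(0),), t, dtype=torch.long)
        eps = model(x, t_batch, cond)
        # Posterior mean of x_{t-1} given the predicted noise.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # (batch, frames, motion_dim) denoised motion sequence

# Usage: a random vector stands in for a real text-encoder embedding.
model = MotionDenoiser()
text_embedding = torch.randn(2, 512)
motions = sample_motion(model, text_embedding, steps=50)
print(motions.shape)  # torch.Size([2, 60, 66])

Diversity in this setup comes from the random noise initialization: sampling repeatedly with the same text embedding but different seeds produces different motions that all match the condition.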
Papers
Ten papers, published between June 16, 2022 and July 23, 2024.