Gesture Generation
Gesture generation is the task of creating realistic, contextually appropriate movements to accompany speech or text, primarily so that virtual agents and robots can interact with people more naturally. Current research relies heavily on deep learning, particularly diffusion models and transformers, and often conditions on multimodal input (audio, text, video) to improve the naturalness and semantic coherence of the generated gestures. The field matters for human-robot interaction, virtual character animation, and accessibility technologies, where more natural and expressive nonverbal communication is a direct benefit.
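To make the diffusion-based approach concrete, here is a minimal sketch of audio-conditioned gesture sampling. It is not any specific paper's method: the frame/joint/feature dimensions, the noise schedule, and the `denoiser` (an untrained random linear map standing in for a learned transformer) are all illustrative assumptions; only the DDPM-style forward schedule and reverse sampling loop are standard.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from any specific system):
# a gesture clip of FRAMES poses with JOINTS angles each, conditioned on
# per-frame AUDIO_DIM-dimensional audio features, denoised over STEPS steps.
rng = np.random.default_rng(0)
FRAMES, JOINTS, AUDIO_DIM, STEPS = 30, 12, 8, 50

# Linear noise schedule and cumulative products (standard DDPM quantities).
betas = np.linspace(1e-4, 0.02, STEPS)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def denoiser(x_t, audio, t):
    """Placeholder for a learned network (e.g. a transformer) that predicts
    the noise in x_t at step t given the audio features. Here it is an
    untrained random linear map, purely for illustration."""
    W = rng.standard_normal((JOINTS + AUDIO_DIM, JOINTS)) * 0.01
    inp = np.concatenate([x_t, audio], axis=-1)  # (FRAMES, JOINTS+AUDIO_DIM)
    return inp @ W

def sample(audio):
    """Reverse diffusion: start from Gaussian noise, denoise step by step,
    conditioning each step on the audio features."""
    x = rng.standard_normal((FRAMES, JOINTS))
    for t in reversed(range(STEPS)):
        eps_hat = denoiser(x, audio, t)
        # Posterior mean under the epsilon-prediction parameterization.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:  # inject noise at every step except the last
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

audio_feats = rng.standard_normal((FRAMES, AUDIO_DIM))
poses = sample(audio_feats)
print(poses.shape)  # (30, 12)
```

In a real system the placeholder `denoiser` would be a trained network (the papers in this area typically use transformer backbones), and the pose representation would be joint rotations or positions from motion-capture data rather than random vectors.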