Gesture Generation
Gesture generation focuses on creating realistic and contextually appropriate movements to accompany speech or text, primarily so that virtual agents and robots can interact with people more naturally. Current research relies heavily on deep learning models, particularly diffusion models and transformers, and often incorporates multimodal inputs (audio, text, video) to improve the naturalness and semantic coherence of generated gestures. The field matters for human-robot interaction, virtual character animation, and accessibility technologies, all of which benefit from more natural and expressive communication.
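
To make the modeling approach concrete, below is a minimal sketch (not any specific paper's method) of a transformer-based gesture generator that maps per-frame audio features to per-frame skeletal joint rotations. All names, dimensions, and the dummy input are illustrative assumptions; real systems add positional encodings, text conditioning, and diffusion- or GAN-based decoders on top of a backbone like this.

# Minimal sketch of audio-conditioned gesture generation with a transformer encoder.
# Assumed setup: 80-dim mel features per frame in, 24 joints x 6D rotation per frame out.
import torch
import torch.nn as nn

class AudioToGestureTransformer(nn.Module):
    def __init__(self, audio_dim=80, model_dim=256, n_joints=24, n_layers=4, n_heads=4):
        super().__init__()
        # Project audio features (e.g. mel bands) into the model dimension.
        self.audio_proj = nn.Linear(audio_dim, model_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=model_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Predict a 6D rotation representation for every joint at every frame.
        self.pose_head = nn.Linear(model_dim, n_joints * 6)

    def forward(self, audio_feats):
        # audio_feats: (batch, frames, audio_dim), aligned to the pose frame rate.
        x = self.audio_proj(audio_feats)
        x = self.encoder(x)              # temporal context across the whole clip
        return self.pose_head(x)         # (batch, frames, n_joints * 6)

if __name__ == "__main__":
    model = AudioToGestureTransformer()
    dummy_audio = torch.randn(2, 120, 80)   # 2 clips, 120 frames of mel features
    poses = model(dummy_audio)
    print(poses.shape)                      # torch.Size([2, 120, 144])

Training such a model against motion-capture data with a regression loss tends to average out motion; this is one reason recent work replaces the direct regression head with a diffusion model that denoises gesture sequences conditioned on the speech encoding.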