Synthetic Gesture
Synthetic gesture generation focuses on creating realistic, expressive artificial gestures, primarily for virtual agents and human-computer interfaces. Current research emphasizes gestures that are semantically aligned with speech, typically produced by deep learning models such as diffusion models and transformers that condition on multimodal inputs (audio, text, and even scene context) to improve naturalness and personalization; a minimal sketch of such a pipeline follows below. The field is central to advancing virtual human technology, improving human-robot interaction, and enabling more intuitive interfaces for applications ranging from assistive technologies to automotive systems.
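To make the general recipe concrete, here is a minimal, illustrative sketch of diffusion-based co-speech gesture generation in PyTorch: a transformer denoiser conditioned on per-frame audio features and a pooled text embedding, run through a toy DDPM-style sampling loop. All names, feature dimensions (e.g. a 57-dimensional pose for 19 joints with 3 rotation parameters each), and hyperparameters are assumptions for illustration; this does not reproduce any specific published model.

```python
# Hedged sketch: diffusion-based speech-to-gesture generation.
# Every dimension and module choice here is an illustrative assumption.
import torch
import torch.nn as nn

class GestureDenoiser(nn.Module):
    """Transformer that predicts the noise added to a gesture sequence,
    conditioned on per-frame audio features and a pooled text embedding."""
    def __init__(self, pose_dim=57, audio_dim=80, text_dim=256, d_model=256):
        super().__init__()
        self.pose_in = nn.Linear(pose_dim, d_model)
        self.audio_in = nn.Linear(audio_dim, d_model)
        self.text_in = nn.Linear(text_dim, d_model)
        self.time_in = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.pose_out = nn.Linear(d_model, pose_dim)

    def forward(self, noisy_pose, t, audio, text):
        # Fuse modalities by summation; the text embedding and diffusion
        # timestep are broadcast across the time axis.
        h = (self.pose_in(noisy_pose)
             + self.audio_in(audio)
             + self.text_in(text).unsqueeze(1)
             + self.time_in(t.view(-1, 1, 1).float()))
        return self.pose_out(self.encoder(h))

# Toy DDPM-style sampling loop over a short gesture clip.
model = GestureDenoiser()
T, frames, batch = 50, 120, 1
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

audio = torch.randn(batch, frames, 80)  # stand-in for mel-spectrogram frames
text = torch.randn(batch, 256)          # stand-in for a sentence embedding
x = torch.randn(batch, frames, 57)      # start the gesture clip from noise

with torch.no_grad():
    for t in reversed(range(T)):
        eps = model(x, torch.full((batch,), t), audio, text)
        a, ab = alphas[t], alpha_bars[t]
        # Standard DDPM posterior mean, plus noise except at the last step.
        x = (x - (1 - a) / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)

print(x.shape)  # (1, 120, 57): a synthesized joint-rotation sequence
```

Summing modality embeddings is the simplest fusion strategy; published systems often use cross-attention or classifier-free guidance instead, but the overall structure (noisy pose in, predicted noise out, speech features as conditioning) is the same.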