Synthetic Gesture

Synthetic gesture generation focuses on creating realistic, expressive artificial gestures, primarily for virtual agents and human-computer interfaces. Current research emphasizes gestures that are semantically aligned with accompanying speech, leveraging deep learning models such as diffusion models and transformers, and often conditioning on multimodal inputs (audio, text, and even scene context) to improve naturalness and personalization. This work is central to advancing virtual-human technology, improving human-robot interaction, and enabling more intuitive interfaces for applications ranging from assistive technologies to automotive systems.
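To make the diffusion-based approach concrete, the sketch below shows the shape of a speech-conditioned reverse-diffusion sampler for gesture sequences. All dimensions, the noise schedule, and the "denoiser" (a fixed random linear map standing in for a trained transformer) are illustrative assumptions, not any specific paper's model; only the DDPM-style update rule is standard.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): 30 gesture frames, 12 joint-rotation
# values per frame, 16-dim speech/text conditioning vector per frame.
T, J, C = 30, 12, 16
STEPS = 50

# Linear noise schedule (beta_t) and cumulative alpha products, DDPM-style.
betas = np.linspace(1e-4, 0.02, STEPS)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

# Stand-in "denoiser": a fixed random linear map from the concatenated
# [noisy gesture, conditioning] features to a noise estimate. A real
# system would use a trained transformer or U-Net here.
W = rng.normal(0.0, 0.05, size=(J + C, J))

def predict_noise(x_t, cond):
    """Predict the noise in x_t, given per-frame speech conditioning."""
    return np.concatenate([x_t, cond], axis=-1) @ W

def sample(cond):
    """Run the reverse diffusion chain to generate one gesture clip."""
    x = rng.normal(size=(T, J))            # start from pure Gaussian noise
    for t in reversed(range(STEPS)):
        eps = predict_noise(x, cond)
        # Standard DDPM posterior-mean update.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                          # add noise except at the final step
            x += np.sqrt(betas[t]) * rng.normal(size=x.shape)
    return x

gesture = sample(rng.normal(size=(T, C)))
print(gesture.shape)  # (30, 12): one motion frame per input speech frame
```

The key design point this illustrates is that conditioning is injected at every denoising step, so the generated motion can stay aligned with the speech features frame by frame rather than being post-hoc synchronized.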

Papers