High-Quality Gesture Generation
High-quality gesture generation focuses on creating natural and semantically meaningful gestures for robots and virtual agents, improving human-robot interaction and enhancing the realism of virtual characters. Current research emphasizes multimodal approaches, integrating speech, text, and visual data, often employing transformer-based architectures, diffusion models, and state space models to generate diverse and contextually appropriate gestures. This work is significant for advancing human-computer interaction, enabling more intuitive robot control, and creating more engaging and believable virtual characters across various applications, including robotics, virtual reality, and assistive technologies.
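To make the diffusion-based approach mentioned above concrete, the following is a minimal, self-contained sketch of DDPM-style sampling for a gesture sequence conditioned on audio features. All names, shapes, and the toy linear "denoiser" are illustrative assumptions; a real system would use a trained transformer network in place of `predict_noise`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes: T gesture frames, each a D-dim pose vector,
# conditioned on per-frame audio features (A-dim). Illustrative only.
T, D, A = 60, 12, 8
STEPS = 50

# Linear noise schedule (betas) and cumulative alpha products,
# as in a standard DDPM formulation.
betas = np.linspace(1e-4, 0.02, STEPS)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

# Stand-in "denoiser": a random linear map. In a real model this would be
# a trained transformer predicting the noise from (x_t, t, audio).
W = rng.normal(scale=0.1, size=(D + A + 1, D))

def predict_noise(x_t, t, audio):
    """Hypothetical noise predictor conditioned on timestep and audio."""
    t_feat = np.full((T, 1), t / STEPS)
    inp = np.concatenate([x_t, audio, t_feat], axis=1)
    return inp @ W

def sample(audio):
    """Reverse process: start from Gaussian noise, denoise step by step."""
    x = rng.normal(size=(T, D))
    for t in reversed(range(STEPS)):
        eps = predict_noise(x, t, audio)
        # Standard DDPM posterior mean update.
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.normal(size=(T, D))
    return x

audio = rng.normal(size=(T, A))
gesture = sample(audio)
print(gesture.shape)
```

Conditioning on speech or text in published systems typically enters through cross-attention inside the denoiser rather than the simple feature concatenation used here.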
Papers
Integrating Representational Gestures into Automatically Generated Embodied Explanations and its Effects on Understanding and Interaction Quality
Amelie Sophie Robrecht, Hendric Voss, Lisa Gottschalk, Stefan Kopp
Deep self-supervised learning with visualisation for automatic gesture recognition
Fabien Allemand, Alessio Mazzela, Jun Villette, Decky Aspandi, Titus Zaharia