Paper ID: 2208.03244
Real-time Gesture Animation Generation from Speech for Virtual Human Interaction
Manuel Rebol, Christian Gütl, Krzysztof Pietroszek
We propose a real-time system for synthesizing gestures directly from speech. Our data-driven approach uses generative adversarial networks (GANs) to model the speech-gesture relationship. We utilize the large amount of speaker video data available online to train our 3D gesture model. Our model generates speaker-specific gestures from consecutive two-second chunks of audio input. We animate the predicted gestures on a virtual avatar, achieving a delay below three seconds between audio input and gesture animation. Code and videos are available at https://github.com/mrebol/Gestures-From-Speech
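The abstract describes feeding the model consecutive two-second audio chunks. A minimal sketch of that chunking step, assuming a mono waveform and a hypothetical 16 kHz sample rate (the actual rate and any chunk overlap are not stated in the abstract):

```python
import numpy as np

SAMPLE_RATE = 16_000           # assumed sample rate (Hz); not specified in the abstract
CHUNK_SECONDS = 2              # chunk length per the abstract
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS

def consecutive_chunks(audio: np.ndarray):
    """Yield consecutive, non-overlapping two-second chunks of a mono signal.

    A trailing partial chunk (shorter than two seconds) is dropped; a real
    streaming system would instead buffer it until enough audio arrives.
    """
    n_full = len(audio) // CHUNK_SAMPLES
    for i in range(n_full):
        yield audio[i * CHUNK_SAMPLES:(i + 1) * CHUNK_SAMPLES]

# Example: 5 seconds of audio yields two full two-second chunks.
audio = np.zeros(5 * SAMPLE_RATE, dtype=np.float32)
chunks = list(consecutive_chunks(audio))
```

Each chunk would then be passed to the trained gesture model, whose per-chunk inference time plus the two-second buffering accounts for the sub-three-second end-to-end delay reported above.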
Submitted: Aug 5, 2022