Paper ID: 2404.14745
TAAT: Think and Act from Arbitrary Texts in Text2Motion
Runqi Wang, Caoyuan Ma, Guopeng Li, Zheng Wang
Text-to-Motion aims to generate human motions from text. Existing settings assume that the text contains explicit action labels, which limits flexibility in practical scenarios. This paper extends the task with a more realistic assumption: the input text is arbitrary. Specifically, our setting covers both existing action texts, which are built from action labels, and newly introduced scene texts that contain no explicit action labels. To support this setting, we extend the action texts in the HUMANML3D dataset with additional scene texts, creating a new dataset, HUMANML3D++. We further propose a simple framework that uses a Large Language Model (LLM) to extract action representations from arbitrary texts and then generates motions from them. In addition, we improve the existing evaluation methodology to address its shortcomings. Extensive experiments across different application scenarios validate the effectiveness of the proposed framework on both the existing and the proposed datasets. The results indicate that Text-to-Motion under this realistic setting remains very challenging, motivating new research in this practical direction. Our dataset and code will be released.
Submitted: Apr 23, 2024
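A minimal sketch of the two-stage "think, then act" pipeline suggested by the abstract: an LLM first rewrites an arbitrary (possibly scene-only) text into explicit action text, and a separate text-to-motion generator then produces a pose sequence. All names and stub implementations below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical two-stage pipeline: "think" (LLM extracts action text from an
# arbitrary/scene text) followed by "act" (text-to-motion generation).
# Every function here is a placeholder stub standing in for real components.
import numpy as np


def think(arbitrary_text: str) -> str:
    """Stage 1 ("think"): extract an action description from arbitrary text.

    In the paper this role is played by an LLM; a canned response stands in
    for the LLM call here.
    """
    prompt = (
        "Describe the human action implied by the following scene:\n"
        f"{arbitrary_text}\nAction:"
    )
    # Placeholder for an LLM call, e.g. llm.complete(prompt).
    return "a person walks forward cautiously and looks around"


def act(action_text: str, num_frames: int = 60, num_joints: int = 22) -> np.ndarray:
    """Stage 2 ("act"): generate a motion sequence from the action text.

    A real system would use a text-to-motion model trained on HUMANML3D /
    HUMANML3D++; random joint positions stand in for its output.
    """
    rng = np.random.default_rng(0)
    return rng.standard_normal((num_frames, num_joints, 3))


if __name__ == "__main__":
    scene_text = "It is late at night and the hallway lights are flickering."
    action_text = think(scene_text)   # scene text -> action text
    motion = act(action_text)         # action text -> motion sequence
    print(action_text, motion.shape)
```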