Paper ID: 2412.11198 • Published Dec 15, 2024
GEM: A Generalizable Ego-Vision Multimodal World Model for Fine-Grained Ego-Motion, Object Dynamics, and Scene Composition Control
Mariam Hassan, Sebastian Stapf, Ahmad Rahimi, Pedro M B Rezende, Yasaman Haghighi, David Brüggemann, Isinsu Katircioglu...
We present GEM, a Generalizable Ego-vision Multimodal world model that
predicts future frames using a reference frame, sparse features, human poses,
and ego-trajectories. Hence, our model has precise control over object
dynamics, ego-agent motion, and human poses. GEM generates paired RGB and depth
outputs for richer spatial understanding. We introduce autoregressive noise
schedules to enable stable long-horizon generation. Our dataset comprises
more than 4000 hours of multimodal data across domains such as autonomous driving,
egocentric human activities, and drone flights. Pseudo-labels provide the
depth maps, ego-trajectories, and human poses. We use a comprehensive
evaluation framework, including a new Control of Object Manipulation (COM)
metric, to assess controllability. Experiments show that GEM generates diverse,
controllable scenarios while maintaining temporal consistency over long generations.
Code, models, and datasets are fully open-sourced.
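The abstract's "autoregressive noise schedules" refer to assigning different diffusion noise levels across frames so that rollouts beyond the training horizon stay stable. A minimal sketch of the general idea (not GEM's actual schedule; the function name, linear ramp, and `sigma_max` parameter are illustrative assumptions):

```python
import numpy as np

def rollout_noise_levels(context_len: int, horizon: int, sigma_max: float = 1.0) -> np.ndarray:
    """Toy per-frame noise schedule for autoregressive video diffusion.

    Already-generated context frames are kept (near) noise-free, while
    frames further into the future receive progressively more noise,
    so each denoising step is conditioned on cleaner past frames.
    Illustrative only; GEM's actual schedule may differ.
    """
    levels = np.zeros(context_len + horizon)
    # Linearly ramp noise over the frames still to be generated.
    levels[context_len:] = sigma_max * np.arange(1, horizon + 1) / horizon
    return levels

# e.g. 2 clean context frames followed by 4 future frames
print(rollout_noise_levels(2, 4))
```

After each denoised frame is committed, the window slides forward and the ramp is reapplied, which is what allows generation far past the training clip length.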