Scene Diffusion
Scene diffusion applies diffusion models to generate realistic, controllable 3D scenes, from street views to indoor environments. Current research focuses on improving the scalability and fidelity of scene generation through hierarchical latent representations (e.g., latent trees), on incorporating scene graphs to guide object placement and inter-object relationships, and on enabling control via text prompts, bounding boxes, or other high-level specifications. This work advances 3D content creation, supports applications in autonomous driving (e.g., realistic traffic simulation), and provides tools for interactive scene design and manipulation.
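To make the core mechanism concrete, the reverse-diffusion loop that underlies such scene generators can be sketched as below. This is a minimal, illustrative toy, not any specific paper's method: each scene is a flat vector of object-box parameters, the noise schedule is an arbitrary linear one, and `stub_denoiser` is a placeholder standing in for a learned network that would be conditioned on a text prompt or scene graph.

```python
# Toy sketch of DDPM-style diffusion over scene-layout parameters.
# Assumptions: each scene is n_objects boxes with 6 params (x, y, z, w, h, d);
# the denoiser is a stub where a trained, conditioned network would go.
import numpy as np

T = 50                                    # number of diffusion steps
betas = np.linspace(1e-4, 0.05, T)        # linear noise schedule (illustrative)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def noise_layout(x0, t, rng):
    """Forward process: q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def stub_denoiser(xt, t, cond=None):
    """Placeholder eps-prediction network; a real model would condition on
    `cond` (e.g., a text prompt or scene graph). Predicts zero noise here."""
    return np.zeros_like(xt)

def sample_layout(n_objects, cond=None, seed=0):
    """Reverse process: start from Gaussian noise, iteratively denoise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_objects * 6)        # 6 params per box
    for t in reversed(range(T)):
        eps_hat = stub_denoiser(x, t, cond)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                                  # no noise at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x.reshape(n_objects, 6)

layout = sample_layout(n_objects=4, cond="a bedroom with a desk")
print(layout.shape)  # (4, 6)
```

In practice the stub would be a transformer or U-Net trained to predict the added noise, and the conditioning signal (prompt, scene graph, or bounding boxes) is what makes the generation controllable.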