3D Scene Generation
3D scene generation aims to automatically create realistic and diverse three-dimensional environments using computational methods, primarily leveraging machine learning techniques. Current research heavily focuses on diffusion models, generative adversarial networks (GANs), and transformers, often incorporating techniques like Gaussian splatting and latent tree representations to improve scene quality, consistency, and scalability. This field is significant for its applications in robotics, virtual and augmented reality, gaming, and film, offering powerful tools for creating immersive experiences and simulations. Furthermore, advancements in 3D scene generation are driving progress in related areas like neural rendering and procedural content generation.
Papers
Scene Splatter: Momentum 3D Scene Generation from Single Image with Video Diffusion Model
Shengjun Zhang, Jinzhao Li, Xin Fei, Hao Liu, Yueqi Duan
Tsinghua University ● Tencent Inc.

WonderTurbo: Generating Interactive 3D World in 0.72 Seconds
Chaojun Ni, Xiaofeng Wang, Zheng Zhu, Weijie Wang, Haoyun Li, Guosheng Zhao, Jie Li, Wenkang Qin, Guan Huang, Wenjun Mei
GigaAI ● Peking University ● Chinese Academy of Sciences ● Zhejiang University