3D Scene
3D scene representation and manipulation are active research areas that aim to create realistic, editable digital environments. Current efforts center on efficient, robust algorithms such as neural radiance fields (NeRFs) and 3D Gaussian splatting, which reconstruct scenes from diverse data sources (images, videos, point clouds) while handling challenges like occlusions, dynamic objects, and adverse weather. These advances are driving applications ranging from autonomous driving and virtual/augmented reality to cultural heritage preservation and interactive 3D content creation. A key focus is developing generalizable models that scale to large scenes and transfer across diverse tasks.
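To make the NeRF family of methods mentioned above concrete, the sketch below shows the core volume-rendering step shared by most of these reconstruction pipelines: given densities and colors sampled along a camera ray, alpha-composite them into a single pixel color using the standard emission–absorption quadrature from the original NeRF formulation. The function name and toy inputs are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite samples along one ray (NeRF-style quadrature).

    sigmas: (N,) volume densities at the sampled points
    colors: (N, 3) RGB radiance at the sampled points
    deltas: (N,) distances between adjacent samples
    """
    # Opacity contributed by each ray segment.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas            # per-sample contribution to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: a uniform-density red medium sampled at 8 points.
sigmas = np.full(8, 0.5)
colors = np.tile([1.0, 0.0, 0.0], (8, 1))
deltas = np.full(8, 0.25)
rgb = composite_ray(sigmas, colors, deltas)
```

For a homogeneous medium the accumulated opacity equals 1 − exp(−∑σδ), so the toy ray above renders a partially transparent red; in a full NeRF, the densities and colors come from a learned network and the compositing weights also drive depth estimation.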
Papers
Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior
Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, Dong Chen
CompoNeRF: Text-guided Multi-object Compositional NeRF with Editable 3D Scene Layout
Haotian Bai, Yuanhuiyi Lyu, Lutao Jiang, Sijia Li, Haonan Lu, Xiaodong Lin, Lin Wang
3D Neural Embedding Likelihood: Probabilistic Inverse Graphics for Robust 6D Pose Estimation
Guangyao Zhou, Nishad Gothoskar, Lirui Wang, Joshua B. Tenenbaum, Dan Gutfreund, Miguel Lázaro-Gredilla, Dileep George, Vikash K. Mansinghka
Structured Generative Models for Scene Understanding
Christopher K. I. Williams