Indoor Scene Synthesis
Indoor scene synthesis focuses on automatically generating realistic 3D indoor environments, aiming to overcome the limitations of manual creation and to provide large-scale data for various applications. Current research relies heavily on diffusion models, autoregressive models, and neural radiance fields, often incorporating semantic priors (such as scene graphs or depth information) to improve scene coherence, controllability (e.g., through text prompts), and style consistency. The field is crucial for advancing embodied AI, computer vision, and virtual/augmented reality, since it supplies realistic and diverse training data and virtual environments for testing and development.
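To make the two recurring ideas above concrete, the following is a minimal, hypothetical sketch (not any specific paper's method) of how a scene graph can represent an indoor scene and how an autoregressive loop can place objects one at a time, conditioning each placement on the previous object. The object categories, room dimensions, and the `next_to` relation are illustrative assumptions.

```python
import random
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    category: str
    position: tuple  # (x, y) floor coordinates in meters

@dataclass
class SceneGraph:
    objects: list = field(default_factory=list)
    # Relations are (subject_index, predicate, object_index) triples.
    relations: list = field(default_factory=list)

def synthesize_scene(categories, room_size=(4.0, 5.0), seed=0):
    """Toy autoregressive synthesis: each new object is placed relative
    to the previously placed one (a stand-in for a learned conditional
    model), and a 'next_to' relation is recorded in the scene graph."""
    rng = random.Random(seed)
    graph = SceneGraph()
    for i, cat in enumerate(categories):
        if not graph.objects:
            # First object: sample a free position anywhere in the room.
            pos = (rng.uniform(0, room_size[0]), rng.uniform(0, room_size[1]))
        else:
            # Subsequent objects: perturb the last position, clamped to the room.
            px, py = graph.objects[-1].position
            pos = (min(max(px + rng.uniform(-1, 1), 0), room_size[0]),
                   min(max(py + rng.uniform(-1, 1), 0), room_size[1]))
            graph.relations.append((i, "next_to", i - 1))
        graph.objects.append(SceneObject(cat, pos))
    return graph

scene = synthesize_scene(["bed", "nightstand", "lamp"])
```

A real system would replace the random placement with a learned conditional distribution (e.g., a diffusion or transformer prior), but the data flow, graph growing alongside objects, is the same.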
14 papers