Indoor Scene Synthesis

Indoor scene synthesis focuses on automatically generating realistic 3D indoor environments, aiming to overcome the cost and scale limits of manual scene authoring and to provide large-scale scene data for downstream applications. Current research relies heavily on diffusion models, autoregressive models, and neural radiance fields, often incorporating semantic priors (such as scene graphs or depth information) to improve scene coherence, controllability (e.g., via text prompts), and style consistency. The field is important for advancing embodied AI, computer vision, and virtual/augmented reality, as it supplies realistic, diverse training data and virtual environments for testing and development.
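
To make the idea of a scene-graph prior concrete, the sketch below shows a toy graph of objects and spatial relations, flattened into an object-layout sequence of the kind a learned layout model (autoregressive or diffusion-based) could be conditioned on. It is purely illustrative and not taken from any particular paper; the names `SceneNode`, `Relation`, `SceneGraph`, and `place_objects` are hypothetical, and the greedy placement rule stands in for a learned generator.

```python
"""Toy illustration: a scene graph as a semantic prior for indoor layout.
Hypothetical structures only; a real system would replace the greedy
placement with a learned model conditioned on the same graph."""
from dataclasses import dataclass, field


@dataclass
class SceneNode:
    category: str                            # e.g. "bed", "nightstand"
    size: tuple = (1.0, 1.0)                 # footprint (width, depth) in metres


@dataclass
class Relation:
    src: int    # index of the object being placed
    dst: int    # index of the anchor object
    kind: str   # e.g. "left_of", "right_of", "next_to"


@dataclass
class SceneGraph:
    nodes: list = field(default_factory=list)
    relations: list = field(default_factory=list)


def place_objects(graph: SceneGraph, spacing: float = 0.2):
    """Greedy rule-based layout: anchor the first object at the origin,
    then offset each related object from its anchor by half-footprints
    plus a spacing margin."""
    positions = {0: (0.0, 0.0)}
    offsets = {"left_of": (-1.0, 0.0), "right_of": (1.0, 0.0), "next_to": (1.0, 0.0)}
    for rel in graph.relations:
        ax, ay = positions.get(rel.dst, (0.0, 0.0))
        dx, dy = offsets.get(rel.kind, (0.0, 1.0))
        gap = (graph.nodes[rel.dst].size[0] / 2
               + graph.nodes[rel.src].size[0] / 2 + spacing)
        positions[rel.src] = (ax + dx * gap, ay + dy * gap)
    return [(graph.nodes[i].category, *positions.get(i, (0.0, 0.0)))
            for i in range(len(graph.nodes))]


if __name__ == "__main__":
    g = SceneGraph(
        nodes=[SceneNode("bed", (1.6, 2.0)), SceneNode("nightstand", (0.5, 0.5))],
        relations=[Relation(src=1, dst=0, kind="left_of")],
    )
    print(place_objects(g))  # [('bed', 0.0, 0.0), ('nightstand', -1.25, 0.0)]
```

The same graph could equally serve as a conditioning input to a text- or graph-conditioned generative model; the point is only that the prior encodes object categories and pairwise relations rather than raw geometry.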

Papers