Indoor Scene Synthesis
Indoor scene synthesis focuses on automatically generating realistic 3D indoor environments, aiming to overcome the limitations of manual scene creation and to provide large-scale data for downstream applications. Current research relies heavily on diffusion models, autoregressive models, and neural radiance fields, often incorporating semantic priors such as scene graphs or depth information to improve scene coherence, controllability (e.g., via text prompts), and style consistency. This field is crucial for advancing embodied AI, computer vision, and virtual/augmented reality, as it supplies realistic and diverse training data as well as virtual environments for testing and development.
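To make the autoregressive formulation concrete, the sketch below (a minimal illustration, not taken from any specific paper) generates a scene object by object: each step samples the next object's category and pose conditioned on the partial scene, stopping when a stop token is emitted. The function `sample_next_object` is a hypothetical stand-in for a learned conditional distribution, e.g. a transformer decoder over object tokens; here it just samples randomly so the script runs end to end.

```python
import random
from dataclasses import dataclass, field

CATEGORIES = ["bed", "nightstand", "wardrobe", "desk", "chair", "<stop>"]

@dataclass
class SceneObject:
    category: str
    x: float        # position on the floor plane (meters)
    y: float
    angle: float    # orientation (degrees)

@dataclass
class Scene:
    room_size: tuple = (4.0, 5.0)            # room width, depth (meters)
    objects: list = field(default_factory=list)

def sample_next_object(scene: Scene) -> SceneObject | None:
    """Hypothetical stand-in for a learned autoregressive model:
    in a real system this would sample from p(o_t | o_1..o_{t-1}, room),
    e.g. via a transformer decoder conditioned on the partial scene."""
    category = random.choice(CATEGORIES)
    if category == "<stop>":
        return None
    width, depth = scene.room_size
    return SceneObject(
        category=category,
        x=random.uniform(0.0, width),
        y=random.uniform(0.0, depth),
        angle=random.choice([0.0, 90.0, 180.0, 270.0]),
    )

def synthesize(max_objects: int = 10) -> Scene:
    """Autoregressive loop: place objects one at a time until the
    model emits a stop token or the object budget is exhausted."""
    scene = Scene()
    for _ in range(max_objects):
        obj = sample_next_object(scene)
        if obj is None:
            break
        scene.objects.append(obj)
    return scene

if __name__ == "__main__":
    scene = synthesize()
    for obj in scene.objects:
        print(f"{obj.category:10s} at ({obj.x:.2f}, {obj.y:.2f}), {obj.angle:.0f} deg")
```

The same loop structure underlies diffusion-based approaches as well, except that all object attributes are denoised jointly rather than sampled one object at a time; semantic priors such as scene graphs typically enter as additional conditioning on the sampling step.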