3D Indoor Scene
3D indoor scene generation focuses on creating realistic and diverse virtual indoor environments, primarily aiming to automate the design process and to supply data for downstream applications. Current research relies heavily on generative models, including diffusion models, GANs, and transformer-based architectures, often conditioned on scene graphs, floor plans, or even human motion data to improve realism and controllability. The field matters for gaming, virtual and augmented reality, robotics, and computer vision, both as a source of high-quality synthetic training data and as an enabler of new interactive applications.
Papers
RoSI: Recovering 3D Shape Interiors from Few Articulation Images
Akshay Gadi Patil, Yiming Qian, Shan Yang, Brian Jackson, Eric Bennett, Hao Zhang
NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds
Chen Yang, Peihao Li, Zanwei Zhou, Shanxin Yuan, Bingbing Liu, Xiaokang Yang, Weichao Qiu, Wei Shen