Synthetic Scene
Synthetic scene generation focuses on creating realistic virtual environments, driven primarily by the need for large, diverse datasets to train and evaluate computer vision and robotics models. Current research emphasizes advanced generative models, including diffusion models and neural radiance fields (NeRFs), often conditioned on semantic maps or other scene representations to improve controllability and realism. These advances are crucial for improving AI systems in areas such as autonomous driving, robotics, and augmented reality, and they offer a cost-effective alternative to collecting and annotating real-world data.
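To make the conditioning idea concrete, the sketch below shows a minimal NeRF-style model whose per-point color and density prediction is conditioned on a semantic class label (as from a projected semantic map), followed by the standard volume-rendering quadrature. This is an illustrative assumption about how such conditioning can be wired up, not the method of either paper listed here; all names (SemanticNeRF, n_classes, sem_dim, etc.) are hypothetical.

```python
import torch
import torch.nn as nn

class SemanticNeRF(nn.Module):
    """Toy NeRF-style MLP conditioned on a per-point semantic label."""
    def __init__(self, pos_dim=63, n_classes=20, sem_dim=16, hidden=256):
        super().__init__()
        # Map each semantic-map class id to a learned feature vector.
        self.sem_embed = nn.Embedding(n_classes, sem_dim)
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim + sem_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # outputs (r, g, b, sigma) per sample point
        )

    def forward(self, x_enc, labels):
        # x_enc: (N, S, pos_dim) positionally encoded samples along N rays
        # labels: (N, S) integer semantic class per sample point
        h = torch.cat([x_enc, self.sem_embed(labels)], dim=-1)
        out = self.mlp(h)
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3])
        return rgb, sigma

def volume_render(rgb, sigma, deltas):
    # Standard NeRF quadrature: alpha_i = 1 - exp(-sigma_i * delta_i),
    # T_i = prod_{j<i}(1 - alpha_j), pixel = sum_i T_i * alpha_i * rgb_i.
    alpha = 1.0 - torch.exp(-sigma * deltas)                        # (N, S)
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    weights = trans * alpha                                         # (N, S)
    return (weights.unsqueeze(-1) * rgb).sum(dim=-2)                # (N, 3)

# Toy usage: 4 rays with 32 samples each.
model = SemanticNeRF()
x = torch.randn(4, 32, 63)
labels = torch.randint(0, 20, (4, 32))
deltas = torch.full((4, 32), 0.05)
rgb, sigma = model(x, labels)
pixels = volume_render(rgb, sigma, deltas)  # (4, 3) rendered ray colors
```

The design choice to inject the semantic embedding at the input, rather than at a later layer, is the simplest option; published conditioning schemes vary.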
Papers
NeRF synthesis with shading guidance
Chenbin Li, Yu Xin, Gaoyi Liu, Xiang Zeng, Ligang Liu
Habitat Synthetic Scenes Dataset (HSSD-200): An Analysis of 3D Scene Scale and Realism Tradeoffs for ObjectGoal Navigation
Mukul Khanna, Yongsen Mao, Hanxiao Jiang, Sanjay Haresh, Brennan Shacklett, Dhruv Batra, Alexander Clegg, Eric Undersander, Angel X. Chang, Manolis Savva