Synthetic Scene
Synthetic scene generation focuses on creating realistic virtual environments for a range of applications, driven primarily by the need for large, diverse datasets to train and evaluate computer vision and robotics models. Current research emphasizes advanced generative models, including diffusion models and neural radiance fields (NeRFs), often conditioned on semantic maps or other scene representations to improve controllability and realism. These advances are crucial for improving the performance of AI systems in areas such as autonomous driving, robotics, and augmented reality, and they offer a cost-effective alternative to collecting and annotating real-world data.
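To make the idea of conditioning a generative model on a semantic map concrete, the sketch below shows the common preprocessing step: a per-pixel integer label map is one-hot encoded into class channels and concatenated with the model's noise input. This is a minimal illustration using NumPy and a toy label map, not the pipeline of any particular paper; the shapes and class count are assumptions for the example.

```python
import numpy as np

def one_hot_semantic_map(labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Convert an (H, W) integer label map into (num_classes, H, W) one-hot channels."""
    # Broadcasting compares each class index against every pixel label.
    return (np.arange(num_classes)[:, None, None] == labels[None]).astype(np.float32)

def make_conditioned_input(labels: np.ndarray, num_classes: int,
                           rng: np.random.Generator) -> np.ndarray:
    """Stack Gaussian noise channels with the one-hot semantic map.

    A conditional diffusion model would receive this (3 + num_classes, H, W)
    tensor so that generation respects the per-pixel scene layout.
    """
    sem = one_hot_semantic_map(labels, num_classes)
    noise = rng.standard_normal((3, *labels.shape)).astype(np.float32)
    return np.concatenate([noise, sem], axis=0)

# Toy 4x4 scene: 0 = sky, 1 = road, 2 = car.
labels = np.array([[0, 0, 0, 0],
                   [0, 0, 2, 0],
                   [1, 1, 2, 1],
                   [1, 1, 1, 1]])
x = make_conditioned_input(labels, num_classes=3, rng=np.random.default_rng(0))
print(x.shape)  # (6, 4, 4): 3 noise channels + 3 semantic channels
```

Real systems apply the same encoding at the resolution of the generator's feature maps, but the principle is identical: the semantic map enters the model as extra input channels.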