Scene Synthesis
Scene synthesis is the task of generating realistic and diverse 3D scenes, creating virtual environments for applications ranging from robotics training to video game development. Current research relies heavily on diffusion models, often conditioned on inputs such as floor plans, text descriptions, or human motion to guide generation and improve controllability. These advances benefit computer vision, robotics, and virtual/augmented reality by supplying high-quality synthetic data for training and by enabling more immersive, interactive experiences.
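The conditioning mechanism described above can be illustrated with a toy sketch of a classifier-free-guidance-style sampling loop. Everything here is hypothetical: `toy_denoiser` is a stand-in for a trained noise-prediction network, and the conditioning vector stands in for an embedding of, say, a floor plan; no real scene-synthesis model is this simple.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(x, t, cond):
    # Hypothetical stand-in for a trained noise-prediction network.
    # It "predicts" noise that nudges the sample toward the target
    # encoded by `cond`; cond=None means unconditional prediction.
    target = np.zeros_like(x) if cond is None else cond
    return x - target

def guided_sample(cond, guidance=3.0, steps=50, dim=8):
    """Toy classifier-free-guidance sampling loop.

    Each step mixes the conditional and unconditional noise
    predictions, then takes a simple Euler-style denoising update.
    """
    x = rng.standard_normal(dim)  # start from pure noise
    for t in range(steps, 0, -1):
        eps_cond = toy_denoiser(x, t, cond)
        eps_uncond = toy_denoiser(x, t, None)
        # Guidance pushes the sample harder toward the condition.
        eps = eps_uncond + guidance * (eps_cond - eps_uncond)
        x = x - (1.0 / steps) * eps
    return x

layout = np.full(8, 2.0)      # stand-in conditioning vector
scene = guided_sample(layout)  # sample drifts toward the condition
```

The key design point mirrored from real conditional diffusion models is the guidance line: interpolating past the unconditional prediction trades sample diversity for stronger adherence to the conditioning signal.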