Dynamic Scene
Dynamic scene representation aims to accurately model and render three-dimensional scenes undergoing changes over time, focusing on efficient and realistic novel view synthesis. Current research heavily utilizes neural implicit representations, such as Neural Radiance Fields (NeRFs) and Gaussian splatting, often incorporating techniques like spatio-temporal modeling, motion factorization, and semantic segmentation to improve accuracy and efficiency, particularly for complex scenes with multiple moving objects. This field is crucial for advancements in autonomous driving, robotics, virtual and augmented reality, and video editing, enabling applications ranging from realistic simulations to interactive 3D content creation.
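To make the spatio-temporal modeling idea concrete, the sketch below shows a minimal time-conditioned radiance field: a tiny MLP that maps a 3D position plus a time value to color and density, using NeRF-style positional encoding. This is an illustrative toy (the network sizes, random weights, and function names are assumptions, not any specific paper's architecture), but it captures how time enters the representation as an extra input dimension.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Map inputs to sin/cos features at increasing frequencies (NeRF-style)."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(np.sin(2.0**i * np.pi * x))
        feats.append(np.cos(2.0**i * np.pi * x))
    return np.concatenate(feats, axis=-1)

class TimeConditionedField:
    """Toy MLP mapping (position, time) -> (RGB, density).

    Weights are random for illustration; a real model would be trained
    by volume-rendering rays and minimizing photometric loss.
    """
    def __init__(self, num_freqs=4, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 4 * (1 + 2 * num_freqs)  # encoded (x, y, z, t)
        self.num_freqs = num_freqs
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 4))  # RGB + density
        self.b2 = np.zeros(4)

    def query(self, xyz, t):
        # Append the scalar time t as a fourth coordinate per point.
        xyzt = np.concatenate([xyz, np.full((len(xyz), 1), t)], axis=-1)
        h = np.maximum(positional_encoding(xyzt, self.num_freqs) @ self.W1 + self.b1, 0.0)
        out = h @ self.W2 + self.b2
        rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))   # sigmoid -> colors in [0, 1]
        density = np.log1p(np.exp(out[:, 3]))     # softplus -> non-negative density
        return rgb, density

# Querying the same points at two times can yield different appearance,
# which is what lets the field represent a changing scene.
field = TimeConditionedField()
points = np.random.default_rng(1).uniform(-1, 1, (5, 3))
rgb_t0, density_t0 = field.query(points, t=0.0)
rgb_t1, density_t1 = field.query(points, t=1.0)
```

Deformation-field approaches factor this differently (a canonical static field plus a learned warp over time), but the extra time input shown here is the simplest spatio-temporal formulation.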