Dynamic Scene
Dynamic scene representation aims to accurately model and render three-dimensional scenes that change over time, with a focus on efficient, realistic novel view synthesis. Current research relies heavily on neural scene representations such as Neural Radiance Fields (NeRFs) and 3D Gaussian splatting, often combined with techniques like spatio-temporal modeling, motion factorization, and semantic segmentation to improve accuracy and efficiency, particularly in complex scenes with multiple moving objects. The field underpins advances in autonomous driving, robotics, virtual and augmented reality, and video editing, enabling applications ranging from realistic simulation to interactive 3D content creation.
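The spatio-temporal modeling mentioned above can be sketched in miniature: a dynamic NeRF-style model conditions a radiance field on time as well as position, so the same 3D point can yield different color and density at different moments. The sketch below is a toy illustration with random, untrained weights (a real model would be trained on posed images); the class and function names are our own, not from any specific paper.

```python
import numpy as np

def positional_encoding(p, num_freqs=4):
    """NeRF-style encoding: lift coordinates into sin/cos
    features at exponentially increasing frequencies."""
    feats = [p]
    for i in range(num_freqs):
        feats.append(np.sin((2.0 ** i) * np.pi * p))
        feats.append(np.cos((2.0 ** i) * np.pi * p))
    return np.concatenate(feats, axis=-1)

class DynamicRadianceField:
    """Toy time-conditioned radiance field: a tiny MLP mapping
    (x, y, z, t) -> (rgb, density). Weights are random here;
    this only illustrates the spatio-temporal conditioning."""

    def __init__(self, num_freqs=4, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 4 * (1 + 2 * num_freqs)  # 4 coords, each encoded
        self.num_freqs = num_freqs
        self.w1 = rng.normal(0.0, 0.5, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.5, (hidden, 4))  # rgb + density

    def query(self, xyz, t):
        # Append time as a fourth coordinate, encode, run the MLP.
        p = np.concatenate([xyz, np.full((xyz.shape[0], 1), t)], axis=-1)
        h = np.tanh(positional_encoding(p, self.num_freqs) @ self.w1)
        out = h @ self.w2
        rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))   # sigmoid -> [0, 1]
        density = np.log1p(np.exp(out[:, 3]))     # softplus -> >= 0
        return rgb, density

field = DynamicRadianceField()
pts = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
rgb0, dens0 = field.query(pts, t=0.0)
rgb1, dens1 = field.query(pts, t=1.0)
```

Querying the same points at `t=0` and `t=1` produces different radiance and density, which is exactly what lets such a representation capture moving objects rather than a single frozen scene.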