Dynamic Scene
Dynamic scene representation aims to accurately model and render three-dimensional scenes that change over time, with a focus on efficient and realistic novel view synthesis. Current research relies heavily on neural scene representations such as Neural Radiance Fields (NeRFs) and 3D Gaussian splatting, often combined with spatio-temporal modeling, motion factorization, and semantic segmentation to improve accuracy and efficiency, particularly in complex scenes with multiple moving objects. The field underpins advances in autonomous driving, robotics, virtual and augmented reality, and video editing, enabling applications that range from realistic simulation to interactive 3D content creation.
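To make the core idea concrete, the sketch below shows a toy time-conditioned radiance field in the spirit of dynamic NeRFs: a small network maps a spatio-temporal point (x, y, z, t) to color and density, and a ray is rendered by alpha-compositing samples along it. This is a minimal illustration under assumed names (`TinyDynamicField`, `render_ray`), with random untrained weights, not any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, n_freqs=4):
    # Map each coordinate to itself plus sin/cos features at octave frequencies,
    # as commonly done in NeRF-style models.
    out = [x]
    for k in range(n_freqs):
        out.append(np.sin(2.0**k * x))
        out.append(np.cos(2.0**k * x))
    return np.concatenate(out, axis=-1)

class TinyDynamicField:
    """Toy MLP mapping encoded (x, y, z, t) -> (rgb, density). Untrained."""
    def __init__(self, hidden=32, n_freqs=4):
        in_dim = 4 * (1 + 2 * n_freqs)  # (x, y, z, t) after encoding
        self.n_freqs = n_freqs
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, 4))  # 3 rgb channels + density

    def query(self, xyzt):
        h = np.tanh(positional_encoding(xyzt, self.n_freqs) @ self.W1)
        out = h @ self.W2
        rgb = 1.0 / (1.0 + np.exp(-out[..., :3]))  # sigmoid -> [0, 1]
        sigma = np.log1p(np.exp(out[..., 3]))      # softplus -> non-negative
        return rgb, sigma

def render_ray(field, origin, direction, t, n_samples=32, near=0.0, far=4.0):
    # Sample points along the ray at one timestamp, then volume-render
    # by alpha compositing (standard NeRF quadrature).
    z = np.linspace(near, far, n_samples)
    pts = origin + z[:, None] * direction
    xyzt = np.concatenate([pts, np.full((n_samples, 1), t)], axis=-1)
    rgb, sigma = field.query(xyzt)
    delta = np.diff(z, append=far + (far - near) / n_samples)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)

field = TinyDynamicField()
ray_o, ray_d = np.zeros(3), np.array([0.0, 0.0, 1.0])
color_t0 = render_ray(field, ray_o, ray_d, t=0.0)
color_t1 = render_ray(field, ray_o, ray_d, t=1.0)
```

Because time enters the encoding directly, the same ray yields different colors at different timestamps, which is the essence of modeling scene change; real systems replace the random MLP with a trained network or a deforming set of Gaussians.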
Papers
(Listing of 20 papers dated May 24 – November 3, 2023.)