Dynamic Scene
Dynamic scene representation aims to accurately model and render three-dimensional scenes that change over time, with a focus on efficient and realistic novel view synthesis. Current research relies heavily on neural implicit representations such as Neural Radiance Fields (NeRFs) and Gaussian splatting, often incorporating techniques like spatio-temporal modeling, motion factorization, and semantic segmentation to improve accuracy and efficiency, particularly for complex scenes with multiple moving objects. This field is crucial for advancements in autonomous driving, robotics, virtual and augmented reality, and video editing, enabling applications ranging from realistic simulations to interactive 3D content creation.
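To make the core idea concrete, here is a minimal toy sketch of one common dynamic-scene formulation: a learned deformation field warps each query point at time t back into a canonical (static) frame, where a conventional radiance field is sampled. Everything below is an illustrative assumption, not any specific paper's method; the analytic deformation and Gaussian density blob stand in for what real systems learn with MLPs or Gaussian primitives.

```python
import numpy as np

def deform(x, t):
    """Toy deformation field: the scene translates along +x over time.
    A real method would learn this mapping (x, t) -> canonical x'."""
    offset = np.array([0.5 * t, 0.0, 0.0])
    return x - offset  # warp back to the canonical frame at t = 0

def canonical_field(x):
    """Toy static radiance field in the canonical frame:
    a Gaussian density blob at the origin with a fixed color."""
    density = float(np.exp(-np.sum(x ** 2)))
    color = np.array([1.0, 0.5, 0.2])
    return density, color

def query(x, t):
    """Dynamic query = warp to the canonical frame, then sample
    the static field there."""
    return canonical_field(deform(x, t))

# A point that rides along with the motion sees the same density
# at every time step, since it maps to the same canonical location.
d0, _ = query(np.array([0.0, 0.0, 0.0]), t=0.0)
d1, _ = query(np.array([0.5, 0.0, 0.0]), t=1.0)
```

The appeal of this decomposition is that temporal change is isolated in the (usually small) deformation network, while appearance and geometry live in a single shared canonical representation, which is what makes spatio-temporal modeling tractable for long sequences.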