Dynamic Scene Representation
Dynamic scene representation focuses on creating computer models of scenes that change over time, aiming for accurate and efficient capture of both geometry and appearance. Current research emphasizes novel neural network architectures, including neural radiance fields (NeRFs) and Gaussian splatting, often incorporating techniques like diffusion models for view extrapolation and structured language models for scene description. These advancements enable applications such as high-fidelity 3D reconstruction from limited views, real-time rendering of dynamic scenes, and improved robotic scene understanding, impacting fields like computer vision, robotics, and virtual reality.
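At the core of NeRF-style representations mentioned above is volume rendering: densities and colors sampled along a camera ray are alpha-composited into a single pixel color. The sketch below shows that compositing step in NumPy; the function name and the toy sample values are illustrative, not from any specific paper.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite per-sample (density, color) pairs along one ray.

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N, 3) RGB at each sample
    deltas:    (N,) spacing between consecutive samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans          # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# Toy ray with 4 samples at uniform spacing (illustrative values)
densities = np.array([0.0, 1.5, 3.0, 0.2])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
deltas = np.full(4, 0.25)
rgb, weights = composite_ray(densities, colors, deltas)
```

Dynamic variants such as NeRFPlayer extend this by making density and color functions of time as well as position, but the per-ray compositing step stays the same.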
Papers
Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE
Yuling Gu, Yao Fu, Valentina Pyatkin, Ian Magnusson, Bhavana Dalvi Mishra, Peter Clark
NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields
Liangchen Song, Anpei Chen, Zhong Li, Zhang Chen, Lele Chen, Junsong Yuan, Yi Xu, Andreas Geiger