Interactive Rendering
Interactive rendering aims to produce realistic images of 3D scenes in real time, enabling applications such as virtual and augmented reality. Current research focuses on compact, efficient scene representations, such as neural radiance fields (NeRFs) and 3D Gaussian splatting, often combined with level-of-detail (LOD) and foveated rendering techniques to sustain performance across different hardware capabilities. These advances are shaping computer graphics, robotics, and virtual production by making the creation and manipulation of 3D content faster and less resource-intensive.
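As a rough illustration of how a distance-based LOD scheme can reduce rendering cost for splat-style representations, the sketch below assigns each Gaussian primitive a detail level and keeps only those whose level matches the level required at the current camera distance. The `Splat` class, the `select_lod` function, and the threshold values are hypothetical examples, not taken from any of the papers listed here.

```python
# Minimal sketch of distance-based LOD selection for a Gaussian-splat scene.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
import math


@dataclass
class Splat:
    position: tuple      # world-space center of the Gaussian (x, y, z)
    scale: float         # approximate world-space radius
    lod: int             # level this splat belongs to (0 = finest)


def select_lod(camera_pos, splat_pos, thresholds=(5.0, 15.0, 40.0)):
    """Map camera-to-splat distance to a discrete LOD level (0 = finest)."""
    d = math.dist(camera_pos, splat_pos)
    for level, limit in enumerate(thresholds):
        if d < limit:
            return level
    return len(thresholds)  # beyond the last threshold: coarsest level


def visible_splats(camera_pos, splats):
    """Keep only splats whose stored level equals the level the distance calls for,
    so fine splats are drawn up close and coarse proxies replace them far away."""
    return [s for s in splats if s.lod == select_lod(camera_pos, s.position)]


if __name__ == "__main__":
    scene = [
        Splat((0.0, 0.0, 1.0), 0.02, lod=0),   # fine detail, only needed up close
        Splat((0.0, 0.0, 20.0), 0.5, lod=2),   # coarse proxy, used at a distance
    ]
    print(len(visible_splats((0.0, 0.0, 0.0), scene)))  # -> 1 (only the nearby fine splat)
```

In practice such levels usually come from a hierarchy over merged splats, and the cut through that hierarchy is chosen per frame; the fixed per-splat level here is a simplification for illustration.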
Papers
DN-4DGS: Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering
Jiahao Lu, Jiacheng Deng, Ruijie Zhu, Yanzhe Liang, Wenfei Yang, Tianzhu Zhang, Xu Zhou
Hybrid bundle-adjusting 3D Gaussians for view consistent rendering with pose optimization
Yanan Guo, Ying Xie, Ying Chang, Benkui Zhang, Bo Jia, Lin Cao