Neural Rendering
Neural rendering aims to synthesize photorealistic images of 3D scenes from input data such as multiple photographs or sparse sensor readings, enabling novel view synthesis. Current research focuses heavily on improving the robustness and generalization of neural radiance fields (NeRFs) and related representations such as Gaussian splatting, addressing challenges such as sparse input data, dynamic scenes, and real-world imperfections (e.g., motion blur, low light). These advances have significant implications for applications ranging from augmented and virtual reality to medical imaging and autonomous driving, enabling more accurate and efficient 3D scene representation and manipulation.
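For context on the NeRF-style methods mentioned above, the sketch below shows the standard volume-rendering quadrature that a radiance field evaluates along each camera ray (Gaussian splatting replaces ray marching with rasterized 3D Gaussians but uses a similar alpha-compositing step). This is a minimal NumPy illustration, not code from any of the papers listed below; the function and variable names are hypothetical.

```python
# Minimal sketch of NeRF-style volume rendering along a single ray.
# Illustrative only; names are hypothetical and not taken from the papers below.
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite per-sample densities and colors along one ray.

    densities: (N,) non-negative volume densities sigma_i at each sample
    colors:    (N, 3) RGB radiance c_i predicted at each sample
    deltas:    (N,) distances between consecutive samples along the ray
    Returns the rendered RGB color for the ray.
    """
    alphas = 1.0 - np.exp(-densities * deltas)        # opacity of each ray segment
    trans = np.cumprod(1.0 - alphas + 1e-10)          # transmittance after each sample
    trans = np.concatenate(([1.0], trans[:-1]))       # T_i = prod_{j<i} (1 - alpha_j)
    weights = trans * alphas                          # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)    # expected color along the ray
```

In a full pipeline, a neural network (or a set of optimized Gaussians) predicts the per-sample densities and colors, and this compositing step makes the rendering differentiable so the representation can be fit directly to posed photographs.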
Papers
LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS
Zhiwen Fan, Kevin Wang, Kairun Wen, Zehao Zhu, Dejia Xu, Zhangyang Wang
Neural Texture Puppeteer: A Framework for Neural Geometry and Texture Rendering of Articulated Shapes, Enabling Re-Identification at Interactive Speed
Urs Waldmann, Ole Johannsen, Bastian Goldluecke
LiveNVS: Neural View Synthesis on Live RGB-D Streams
Laura Fink, Darius Rückert, Linus Franke, Joachim Keinert, Marc Stamminger