Neural Rendering
Neural rendering synthesizes photorealistic images of 3D scenes from inputs such as multiple photographs or sparse sensor readings, with novel view synthesis as the flagship task. Current research focuses on improving the robustness and generalization of neural radiance fields (NeRFs) and related representations such as Gaussian splatting, addressing challenges like sparse input data, dynamic scenes, and real-world imperfections (e.g., motion blur, low light). These advances have significant implications for applications ranging from augmented and virtual reality to medical imaging and autonomous driving, enabling more accurate and efficient 3D scene representation and manipulation.
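To make the NeRF side of this concrete, here is a minimal sketch of the discrete volume-rendering quadrature from the original NeRF formulation: per-sample densities and colors predicted along a camera ray are alpha-composited into a single pixel color. The function name and the random toy inputs are illustrative, not drawn from any particular codebase.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Discrete NeRF volume rendering (Mildenhall et al., 2020).

    densities: (N,) non-negative volume density sigma_i at each sample
    colors:    (N, 3) RGB color predicted at each sample
    deltas:    (N,) distance between adjacent samples along the ray
    Returns the composited RGB color for the ray.
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of segment i
    alphas = 1.0 - np.exp(-densities * deltas)
    # T_i = prod_{j<i} (1 - alpha_j): transmittance, i.e. the probability
    # the ray reaches segment i without being absorbed earlier
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas  # per-sample contribution to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Toy usage: 64 random samples along one ray through a unit-length interval
n = 64
rgb = composite_ray(
    densities=np.random.rand(n) * 5.0,
    colors=np.random.rand(n, 3),
    deltas=np.full(n, 1.0 / n),
)
```

In a full NeRF pipeline, `densities` and `colors` would come from an MLP queried at 3D sample points along each ray; Gaussian-splatting methods replace this per-ray quadrature with rasterized, depth-sorted alpha blending of 3D Gaussians, but the compositing logic is analogous.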