Neural Rendering
Neural rendering aims to synthesize realistic images of 3D scenes from input data such as multiple photographs or sparse sensor readings, enabling photorealistic novel view synthesis. Current research focuses heavily on improving the robustness and generalization of neural radiance fields (NeRFs) and related representations such as 3D Gaussian splatting, addressing challenges like sparse input views, dynamic scenes, and real-world imperfections (e.g., motion blur and low light). These advances have significant implications for applications ranging from augmented and virtual reality to medical imaging and autonomous driving, enabling more accurate and efficient 3D scene representation and manipulation.
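At the core of NeRF-style methods is differentiable volume rendering: samples along each camera ray carry a density and a color, and the pixel color is an alpha-composited sum weighted by accumulated transmittance. A minimal sketch of that compositing step (function name and toy values are illustrative, not from any particular codebase):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray.

    sigmas: (N,) volume densities at each sample
    colors: (N, 3) RGB at each sample
    deltas: (N,) distances between adjacent samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i = product over j < i of (1 - alpha_j)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# Toy example: four samples along a single ray
sigmas = np.array([0.0, 0.5, 2.0, 0.1])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
deltas = np.full(4, 0.25)
rgb, weights = volume_render(sigmas, colors, deltas)
```

Because every operation here is differentiable, gradients of a photometric loss on `rgb` can flow back into whatever network predicts `sigmas` and `colors`; Gaussian splatting replaces the per-ray sampling with rasterized, depth-sorted Gaussians but uses the same alpha-compositing rule.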