Neural Rendering
Neural rendering synthesizes photorealistic images of 3D scenes from input data such as multiple photographs or sparse sensor readings, enabling novel view synthesis. Current research focuses on improving the robustness and generalization of neural radiance fields (NeRFs) and related representations such as Gaussian splatting, addressing challenges like sparse input data, dynamic scenes, and real-world imperfections (e.g., motion blur, low light). These advances have significant implications for applications ranging from augmented and virtual reality to medical imaging and autonomous driving, enabling more accurate and efficient 3D scene representation and manipulation.
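To make the NeRF idea concrete, the core operation is volume rendering: compositing per-sample densities and colors along a camera ray into a pixel color. The sketch below is a minimal, illustrative NumPy version of that quadrature; the function and variable names are our own, not taken from any particular paper's codebase.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Composite per-sample densities/colors along one ray into a pixel color.

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) distances between adjacent samples along the ray
    """
    # Per-segment opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Each sample contributes in proportion to (transmittance * opacity)
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: one nearly opaque red sample occludes everything behind it
densities = np.array([0.0, 50.0, 0.0])
colors = np.array([[0.0, 0.0, 1.0],   # blue (transparent, contributes nothing)
                   [1.0, 0.0, 0.0],   # red (dense, dominates the pixel)
                   [0.0, 1.0, 0.0]])  # green (occluded by the red sample)
deltas = np.array([0.5, 0.5, 0.5])
pixel = composite_ray(densities, colors, deltas)
```

In a full NeRF, `densities` and `colors` come from an MLP queried at sample positions and view directions, and this compositing step is differentiable, which is what allows the scene representation to be optimized from photographs.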