Neural Rendering
Neural rendering aims to synthesize realistic images of 3D scenes from various input data, such as multiple photographs or sparse sensor readings, achieving photorealistic novel view synthesis. Current research focuses heavily on improving the robustness and generalization of neural radiance fields (NeRFs) and related representations such as Gaussian splatting, addressing challenges like sparse input data, dynamic scenes, and real-world imperfections (e.g., motion blur, low light). These advances have significant implications for applications ranging from augmented and virtual reality to medical imaging and autonomous driving, enabling more accurate and efficient 3D scene representation and manipulation.
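At the core of NeRF-style methods is volumetric rendering: a network predicts a density and color at sampled points along each camera ray, and the samples are alpha-composited into a pixel color. The sketch below shows that compositing step in NumPy; the function name and the toy inputs are illustrative, not from any particular codebase.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray using NeRF's quadrature:
    alpha_i = 1 - exp(-sigma_i * delta_i),
    T_i = prod_{j<i} (1 - alpha_j),
    C = sum_i T_i * alpha_i * c_i.

    densities: (N,) non-negative sigma values at the sample points.
    colors:    (N, 3) RGB values in [0, 1] at the sample points.
    deltas:    (N,) distances between adjacent samples along the ray.
    Returns the composited (3,) RGB color for the ray.
    """
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: a nearly opaque red first sample occludes everything behind it,
# so the composited color is (approximately) pure red.
dens = np.array([1e4, 1.0, 1.0])
cols = np.array([[1.0, 0.0, 0.0],   # red
                 [0.0, 1.0, 0.0],   # green
                 [0.0, 0.0, 1.0]])  # blue
dts = np.array([0.1, 0.1, 0.1])
rgb = composite_ray(dens, cols, dts)
```

The same weights `T_i * alpha_i` also drive depth estimation (as a weighted mean of sample distances) and, in Gaussian splatting, the analogous per-Gaussian blending.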
182 papers