Physically Based Rendering

Physically based rendering synthesizes realistic images and videos from 3D scenes by modeling how light interacts with surfaces and materials, with the goal of improving both the accuracy and the efficiency of image generation and analysis. Current research enhances realism through techniques such as Gaussian splatting, neural radiance fields (NeRFs), and diffusion models, addressing challenges like reflection rendering, material estimation, and accurate lighting simulation. These advances have significant implications for robotics (e.g., sensor simulation for autonomous navigation), computer graphics (e.g., high-fidelity 3D asset generation), and material science (e.g., estimating material properties from images), and the resulting gains in rendering quality and efficiency are driving progress across applications that require photorealistic image synthesis.
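To make the NeRF-style rendering mentioned above concrete, here is a minimal sketch of the alpha-compositing step such renderers use to turn per-sample densities and colors along a camera ray into a pixel color. The function name `composite_ray` and the toy inputs are illustrative assumptions, not drawn from any particular codebase.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """NeRF-style volume rendering along a single ray.

    densities: (N,) non-negative volume densities sigma_i at N ray samples
    colors:    (N, 3) RGB radiance predicted at each sample
    deltas:    (N,) distances between adjacent samples
    Returns the accumulated RGB color for the ray.
    """
    # Per-segment opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas  # contribution of each sample to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Toy ray: two empty samples, then a dense red region; the dense
# samples should dominate the composited color.
sigma = np.array([0.0, 0.0, 50.0, 50.0])
rgb = np.array([[0, 0, 1], [0, 0, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
delta = np.full(4, 0.1)
print(composite_ray(sigma, rgb, delta))  # close to pure red [1, 0, 0]
```

Differentiability is the point of this formulation: every step is a smooth NumPy-style tensor operation, so in a real NeRF the same compositing is written in an autodiff framework and gradients flow from the rendered pixel back to the network that predicts `densities` and `colors`.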

Papers