Photorealistic Rendering
Photorealistic rendering aims to generate highly realistic images of 3D scenes by accurately simulating light transport and material properties. Current research emphasizes efficient techniques such as neural importance sampling and 3D Gaussian splatting to improve rendering speed and quality, particularly for dynamic scenes and complex geometries like human avatars and urban environments. These advances are driving progress in applications ranging from virtual and augmented reality to autonomous navigation and 3D modeling, enabling more immersive and interactive experiences.
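To make the sampling idea concrete, the sketch below shows plain Monte Carlo importance sampling on a toy one-dimensional integral: samples are drawn from a proposal density shaped like the integrand and reweighted by f(x)/p(x). This is a minimal illustration of the general estimator, not the method of any paper listed here; the integrand, proposal, and function names are chosen for the example.

```python
import math
import random

def estimate(n_samples: int, seed: int = 0) -> float:
    """Estimate the integral of f(x) = x^2 over [0, 1] (true value 1/3)
    by importance sampling with proposal p(x) = 2x, sampled via
    inverse-transform: x = sqrt(u) for u ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        u = rng.random()
        x = math.sqrt(u)      # sample x from p(x) = 2x
        f = x * x             # integrand evaluated at x
        p = 2.0 * x           # proposal density at x
        total += f / p        # importance-weighted contribution
    return total / n_samples
```

Because the proposal concentrates samples where the integrand is large, the weighted estimate converges with far lower variance than uniform sampling; neural importance sampling applies the same principle with a learned proposal distribution.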
Papers
$C^{3}$-NeRF: Modeling Multiple Scenes via Conditional-cum-Continual Neural Radiance Fields
Prajwal Singh, Ashish Tiwari, Gautam Vashishtha, Shanmuganathan Raman
TexGaussian: Generating High-quality PBR Material via Octree-based 3D Gaussian Splatting
Bojun Xiong, Jialun Liu, Jiakui Hu, Chenming Wu, Jinbo Wu, Xing Liu, Chen Zhao, Errui Ding, Zhouhui Lian
MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing
Cong Wang, Di Kang, He-Yi Sun, Shen-Han Qian, Zi-Xuan Wang, Linchao Bao, Song-Hai Zhang
Mesh-based Photorealistic and Real-time 3D Mapping for Robust Visual Perception of Autonomous Underwater Vehicle
Jungwoo Lee, Younggun Cho