View-Dependent Appearance
View-dependent appearance modeling focuses on accurately representing how an object's appearance changes with viewpoint, a central challenge in computer graphics and vision. Current research emphasizes improving the rendering of specular reflections and anisotropic surfaces with scene representations such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting, often incorporating techniques like differentiable mesh extraction and novel encoding methods to improve efficiency and realism. These advances are significant for applications such as novel view synthesis, high-fidelity 3D reconstruction, and real-time rendering on resource-constrained devices like mobile phones. The overarching goal is photorealistic, computationally efficient representations of complex scenes with accurate view-dependent effects.
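To make the idea of view-dependent color concrete, below is a minimal sketch (not taken from either paper listed here) of how a primitive's emitted color can be conditioned on the viewing direction using low-order spherical harmonics, in the spirit of the per-primitive SH colors commonly used in 3D Gaussian Splatting. The coefficient values and helper names are illustrative assumptions, not any specific system's API.

```python
import numpy as np

# Real spherical harmonics constants up to degree 2 (9 basis functions),
# a common choice for cheap per-primitive view-dependent color.
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199
SH_C2 = np.array([1.0925484305920792, -1.0925484305920792, 0.31539156525252005,
                  -1.0925484305920792, 0.5462742152960396])

def sh_basis_deg2(view_dir: np.ndarray) -> np.ndarray:
    """Evaluate the 9 real SH basis functions for a (normalized) view direction."""
    x, y, z = view_dir / np.linalg.norm(view_dir)
    return np.array([
        SH_C0,                                    # l=0
        -SH_C1 * y,                               # l=1, m=-1
        SH_C1 * z,                                # l=1, m= 0
        -SH_C1 * x,                               # l=1, m=+1
        SH_C2[0] * x * y,                         # l=2, m=-2
        SH_C2[1] * y * z,                         # l=2, m=-1
        SH_C2[2] * (2 * z * z - x * x - y * y),   # l=2, m= 0
        SH_C2[3] * x * z,                         # l=2, m=+1
        SH_C2[4] * (x * x - y * y),               # l=2, m=+2
    ])

def view_dependent_color(sh_coeffs: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Combine learned SH coefficients of shape (9, 3) with the view direction
    to produce an RGB color; the +0.5 offset and clamp follow the usual SH-color
    convention so that a zero DC term maps to mid-grey."""
    rgb = sh_basis_deg2(view_dir) @ sh_coeffs + 0.5
    return np.clip(rgb, 0.0, 1.0)

# Illustrative (made-up) coefficients for one primitive: a grey base color plus a
# view-dependent term that brightens the primitive when seen from the +z direction.
coeffs = np.zeros((9, 3))
coeffs[0] = 0.8    # DC term: base appearance shared by all viewpoints
coeffs[2] = 0.4    # l=1, m=0 term: extra brightness for views along +z

print(view_dependent_color(coeffs, np.array([0.0, 0.0, 1.0])))   # seen from +z: brighter
print(view_dependent_color(coeffs, np.array([0.0, 0.0, -1.0])))  # seen from -z: darker
```

Low-order spherical harmonics like these are cheap but smooth, and struggle with sharp specular reflections, which is exactly the regime the papers below target with more sophisticated reflection handling and directional encodings.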
Papers
NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections
Dor Verbin, Pratul P. Srinivasan, Peter Hedman, Ben Mildenhall, Benjamin Attal, Richard Szeliski, Jonathan T. Barron
Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling
Liwen Wu, Sai Bi, Zexiang Xu, Fujun Luan, Kai Zhang, Iliyan Georgiev, Kalyan Sunkavalli, Ravi Ramamoorthi