Lighting Representation
Lighting representation in computer vision and graphics aims to capture and manipulate light in scenes for realistic rendering and image editing. Current research focuses on efficient and robust representations, often combining neural networks (e.g., neural radiance fields, multilayer perceptrons) with physically based models of materials and geometry (e.g., BRDFs, signed distance fields) to represent both global illumination and spatially varying lighting effects. These advances improve the quality of inverse rendering, relighting, and augmented reality by enabling more accurate recovery of scene geometry, materials, and lighting conditions from images or other observations. The resulting gains in realism and control have significant implications for virtual and augmented reality, computer-generated imagery, and materials science.
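The papers below differ in their specific designs, but a common building block behind the phrase "implicit lighting representation" is an MLP that maps an incident direction to RGB radiance, i.e. a learned, continuous environment map. The sketch below is a minimal illustration of that idea in PyTorch; the class name, layer sizes, and positional encoding are illustrative assumptions, not taken from any of the listed papers.

```python
import torch
import torch.nn as nn

class NeuralEnvironmentLight(nn.Module):
    """Implicit lighting representation: an MLP mapping a unit direction
    to RGB radiance, acting as a learned, continuous environment map.
    (Illustrative sketch; not the architecture of any paper listed here.)"""

    def __init__(self, hidden=128, n_freqs=4):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 3 + 3 * 2 * n_freqs  # raw direction + sin/cos encodings
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),  # radiance is non-negative
        )

    def encode(self, d):
        # Positional encoding lets the MLP represent high-frequency lighting.
        feats = [d]
        for k in range(self.n_freqs):
            feats += [torch.sin((2 ** k) * d), torch.cos((2 ** k) * d)]
        return torch.cat(feats, dim=-1)

    def forward(self, directions):
        d = torch.nn.functional.normalize(directions, dim=-1)
        return self.mlp(self.encode(d))

# Usage: query incident radiance for a batch of world-space directions,
# e.g. when shading surface points inside an inverse-rendering loop.
light = NeuralEnvironmentLight()
dirs = torch.randn(1024, 3)   # arbitrary directions; normalized in forward()
radiance = light(dirs)        # (1024, 3) RGB radiance values
```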
Papers
PBIR-NIE: Glossy Object Capture under Non-Distant Lighting
Guangyan Cai, Fujun Luan, Miloš Hašan, Kai Zhang, Sai Bi, Zexiang Xu, Iliyan Georgiev, Shuang Zhao
MAIR++: Improving Multi-view Attention Inverse Rendering with Implicit Lighting Representation
JunYong Choi, SeokYeong Lee, Haesol Park, Seung-Won Jung, Ig-Jae Kim, Junghyun Cho