Lighting Representation

Lighting representation in computer vision and graphics aims to capture and manipulate the light in a scene accurately enough for realistic rendering and image manipulation. Current research focuses on efficient, robust representations that combine neural networks (e.g., neural radiance fields, multilayer perceptrons) with physically based scene models (e.g., signed distance fields for geometry, BRDFs for surface reflectance) to capture both global illumination and spatially varying lighting effects. These representations improve inverse rendering, relighting, and augmented reality applications by enabling more accurate recovery of scene geometry, materials, and lighting conditions from images or other input data. The resulting gains in realism and control have significant implications for virtual and augmented reality, computer-generated imagery, and material science.
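
As a concrete illustration of one such representation, the sketch below (in PyTorch, with the network sizes, frequency encoding, and all function names chosen for illustration rather than taken from any particular paper) models spatially varying incident lighting with a small MLP and shades a surface point under a Lambertian BRDF via Monte Carlo integration.

```python
import math

import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, n_freqs: int = 4) -> torch.Tensor:
    """Sin/cos frequency features so the MLP can fit high-frequency lighting."""
    freqs = (2.0 ** torch.arange(n_freqs, device=x.device)) * math.pi
    angles = x[..., None] * freqs                      # (..., 3, n_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                   # (..., 3 * 2 * n_freqs)


class NeuralLightField(nn.Module):
    """Illustrative MLP mapping (3D position, unit direction) -> incident RGB radiance."""

    def __init__(self, n_freqs: int = 4, hidden: int = 128):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 2 * (3 * 2 * n_freqs)                 # encoded position + direction
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),       # radiance is non-negative
        )

    def forward(self, position: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
        feats = torch.cat(
            [positional_encoding(position, self.n_freqs),
             positional_encoding(direction, self.n_freqs)],
            dim=-1,
        )
        return self.mlp(feats)


def lambertian_shade(light: NeuralLightField,
                     x: torch.Tensor,        # (3,) surface point
                     normal: torch.Tensor,   # (3,) unit surface normal
                     albedo: torch.Tensor,   # (3,) diffuse albedo
                     n_samples: int = 256) -> torch.Tensor:
    """Monte Carlo estimate of outgoing radiance under a Lambertian BRDF."""
    d = torch.randn(n_samples, 3)
    d = d / d.norm(dim=-1, keepdim=True)               # uniform directions on the sphere
    facing = (d * normal).sum(-1, keepdim=True)
    d = torch.where(facing < 0, -d, d)                 # flip into the upper hemisphere
    cos_theta = (d * normal).sum(-1, keepdim=True).clamp(min=0.0)
    radiance = light(x.expand(n_samples, 3), d)        # query incident light per sample
    # Integrand: (albedo / pi) * L_in * cos(theta); uniform hemisphere pdf = 1 / (2 pi)
    return (albedo / math.pi * radiance * cos_theta).mean(dim=0) * (2.0 * math.pi)


# Example query (the untrained network just returns arbitrary positive radiance):
light = NeuralLightField()
rgb = lambertian_shade(light,
                       x=torch.zeros(3),
                       normal=torch.tensor([0.0, 0.0, 1.0]),
                       albedo=torch.tensor([0.8, 0.6, 0.4]))
```

In an actual inverse-rendering or relighting pipeline, such a lighting network would be fit jointly with geometry and material estimates from images, and importance sampling would typically replace the uniform hemisphere sampling used here for clarity.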

Papers