Intrinsic Image
Intrinsic image decomposition aims to separate a photograph into its underlying components, primarily albedo (surface reflectance) and shading (illumination), providing a more physically meaningful representation of a scene. Because many albedo/shading pairs can explain the same image, the problem is ill-posed; current research therefore focuses on improving the accuracy and robustness of the decomposition using deep learning models, particularly diffusion models and generative adversarial networks (GANs), often incorporating physically based priors or illumination-invariant features. This research is significant for applications in image editing, compositing, rendering, and computer vision tasks that require a deeper understanding of scene geometry and material properties.
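The standard formulation behind this decomposition is multiplicative: the observed image is the elementwise product of an albedo layer and a shading layer. A minimal NumPy sketch (with hypothetical synthetic data, not any specific method from the papers below) illustrates the forward model and why the inverse problem is ill-posed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic albedo: reflectance values in [0.1, 0.9]
albedo = rng.uniform(0.1, 0.9, size=(4, 4, 3))

# Hypothetical synthetic shading: smooth grayscale illumination,
# broadcast across the three color channels
shading = np.linspace(0.3, 1.0, 16).reshape(4, 4, 1)

# Forward model: the observed image is the elementwise product I = A * S
image = albedo * shading

# If the true shading were known, albedo would follow by elementwise
# division. Real methods must estimate BOTH factors from the image
# alone, and any rescaling (A/c, c*S) yields the same image, which is
# what makes the decomposition ill-posed without priors.
recovered_albedo = image / shading

assert np.allclose(recovered_albedo, albedo)
```

Learned approaches replace the division step with a network that predicts both factors jointly, using priors (e.g. piecewise-constant albedo, smooth shading) to resolve the scale and attribution ambiguity.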