Low-Light Image Enhancement
Low-light image enhancement (LLIE) aims to improve the quality of images captured in dimly lit conditions by increasing brightness, recovering detail, and restoring color accuracy while avoiding noise amplification. Recent research relies heavily on deep learning, employing architectures such as U-Nets, transformers, and diffusion models, often combined with techniques like Retinex decomposition (which models an image as the product of reflectance and illumination), state-space models, and contrastive learning to handle diverse real-world scenes and high-resolution inputs; a minimal sketch of the Retinex idea follows below. These advances are important for the performance of computer vision systems in low-light environments, with applications ranging from autonomous driving and surveillance to medical imaging and mobile photography. Because paired low-light/normal-light training data are scarce, a growing share of work focuses on unsupervised and semi-supervised methods.
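To make the Retinex idea concrete, the sketch below decomposes an image into reflectance and illumination, brightens only the illumination, and recomposes the result. This is an illustrative classical baseline, not the method of any paper listed here: the Gaussian-blur illumination estimate, the gamma curve, and the parameter values are assumptions chosen for clarity; learning-based methods such as ReCo-Diff instead learn the decomposition and adjustment from data.

```python
# Minimal Retinex-style enhancement sketch (illustrative assumptions:
# illumination estimated by Gaussian smoothing of the max channel,
# brightening via a fixed gamma curve).
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_enhance(img, sigma=15.0, gamma=0.5, eps=1e-6):
    """img: H x W x 3 float array in [0, 1]. Returns an enhanced image."""
    # Estimate illumination L as a smoothed version of the brightest channel.
    illumination = gaussian_filter(img.max(axis=2), sigma=sigma)
    illumination = np.clip(illumination, eps, 1.0)
    # Retinex decomposition: I = R * L, so reflectance R = I / L.
    reflectance = img / illumination[..., None]
    # Brighten the illumination only (gamma < 1 lifts dark regions),
    # leaving reflectance (scene content, color) untouched.
    enhanced_l = illumination ** gamma
    return np.clip(reflectance * enhanced_l[..., None], 0.0, 1.0)
```

Adjusting only the illumination component is what lets Retinex-based pipelines brighten an image without washing out scene content, though in practice the reflectance also needs denoising, which is one reason deep models are used.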
Papers
ClassLIE: Structure- and Illumination-Adaptive Classification for Low-Light Image Enhancement
Zixiang Wei, Yiting Wang, Lichao Sun, Athanasios V. Vasilakos, Lin Wang
ReCo-Diff: Explore Retinex-Based Condition Strategy in Diffusion Model for Low-Light Image Enhancement
Yuhui Wu, Guoqing Wang, Zhiwen Wang, Yang Yang, Tianyu Li, Peng Wang, Chongyi Li, Heng Tao Shen