Low-Light Image Enhancement
Low-light image enhancement (LLIE) aims to improve the quality of images captured in dimly lit conditions by increasing brightness, recovering detail, and correcting color while keeping noise amplification in check. Recent research relies heavily on deep learning, employing architectures such as U-Nets, transformers, and diffusion models, and often incorporates techniques like Retinex decomposition, state-space models, and contrastive learning, particularly to handle diverse real-world scenes and high-resolution images. These advances are crucial for computer vision systems operating in low-light environments, with applications ranging from autonomous driving and surveillance to medical imaging and mobile photography. A growing emphasis is placed on unsupervised and semi-supervised methods to address the scarcity of paired training data.
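To make the Retinex idea mentioned above concrete: classical single-scale Retinex treats a Gaussian-blurred copy of the image as an estimate of the illumination and takes the log-ratio of image to illumination as the reflectance; modern LLIE networks learn such a decomposition rather than hand-coding it. The following is a minimal NumPy sketch, not the method of any paper listed here, and the function names and the sigma value are illustrative choices:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(image, sigma):
    """Separable Gaussian blur with edge padding (pure NumPy)."""
    r = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, r)
    padded = np.pad(image, r, mode="edge")
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, rows)

def single_scale_retinex(image, sigma=10.0):
    """Reflectance = log(image) - log(illumination), where the illumination
    is estimated as a Gaussian-smoothed version of the image."""
    img = image.astype(np.float64) + 1.0  # +1 avoids log(0)
    illumination = gaussian_blur(img, sigma)
    return np.log(img) - np.log(illumination)

# A uniformly lit (flat) image has near-zero reflectance everywhere;
# edges and texture survive the decomposition, which is what makes
# Retinex useful as a normalization step for dim images.
flat = np.full((32, 32), 20.0)
print(np.abs(single_scale_retinex(flat)).max())  # ~0 for a flat image
```

Because the kernel is normalized and the padding replicates edges, a constant image yields an illumination equal to the image itself, so its reflectance is exactly zero up to floating-point error.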
Papers
AGLLDiff: Guiding Diffusion Models Towards Unsupervised Training-free Real-world Low-light Image Enhancement
Yunlong Lin, Tian Ye, Sixiang Chen, Zhenqi Fu, Yingying Wang, Wenhao Chai, Zhaohu Xing, Lei Zhu, Xinghao Ding
Dual High-Order Total Variation Model for Underwater Image Restoration
Yuemei Li, Guojia Hou, Peixian Zhuang, Zhenkuan Pan