Low Light
Low-light image and video enhancement aims to improve the quality of visual data captured in poorly illuminated environments, addressing challenges such as noise, low contrast, and color distortion. Current research relies heavily on deep learning, employing architectures such as transformers, diffusion models, and convolutional neural networks, and often incorporating techniques like Retinex decomposition and vector quantization to improve efficiency and robustness. These advances have significant implications for applications including autonomous driving, medical imaging, and surveillance, where reliable visual perception in low-light conditions is crucial.
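To make the Retinex idea mentioned above concrete, here is a minimal single-scale sketch (not taken from any of the papers below): an image I is modeled as the product of reflectance R and illumination L, the illumination is estimated by smoothing, and the dark regions are lifted by applying a gamma curve to L. The `box_blur` helper and all parameter values are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k=15):
    # Separable box filter used as a crude illumination estimate.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)
    return blurred

def retinex_enhance(img, gamma=0.4, eps=1e-6):
    # Retinex model: I = R * L.
    # Estimate illumination L by smoothing, recover reflectance
    # R = I / L, then brighten L with a gamma curve (gamma < 1
    # lifts dark regions) and recombine.
    L = np.clip(box_blur(img), eps, 1.0)
    R = img / L
    L_adj = L ** gamma
    return np.clip(R * L_adj, 0.0, 1.0)

# Toy grayscale low-light image with values in [0, 1].
rng = np.random.default_rng(0)
dark = np.clip(rng.random((64, 64)) * 0.2, 0.0, 1.0)
out = retinex_enhance(dark)
print(out.mean() > dark.mean())  # the enhanced image is brighter
```

Real methods replace the box filter with multi-scale Gaussian smoothing or, as in several of the papers below, with a learned decomposition network.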
Papers
Super-resolving Real-world Image Illumination Enhancement: A New Dataset and A Conditional Diffusion Model
Yang Liu, Yaofang Liu, Jinshan Pan, Yuxiang Hui, Fan Jia, Raymond H. Chan, Tieyong Zeng
Towards Flexible and Efficient Diffusion Low Light Enhancer
Guanzhou Lan, Qianli Ma, Yuqi Yang, Zhigang Wang, Dong Wang, Yuan Yuan, Bin Zhao
Fast Context-Based Low-Light Image Enhancement via Neural Implicit Representations
Tomáš Chobola, Yu Liu, Hanyi Zhang, Julia A. Schnabel, Tingying Peng
GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval
Han Zhou, Wei Dong, Xiaohong Liu, Shuaicheng Liu, Xiongkuo Min, Guangtao Zhai, Jun Chen