Low Light
Low-light image and video enhancement aims to improve the quality of visual data captured in poorly illuminated environments, addressing challenges such as noise, low contrast, and color distortion. Current research relies heavily on deep learning, employing architectures such as transformers, diffusion models, and convolutional neural networks, and often incorporates techniques like Retinex decomposition and vector quantization to improve efficiency and robustness. These advances have significant implications for applications such as autonomous driving, medical imaging, and surveillance, where reliable visual perception in low-light conditions is crucial.
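As a rough illustration of the Retinex idea mentioned above, the Python sketch below (assuming NumPy and SciPy; the function names are illustrative and not taken from any of the listed papers) splits an image into an illumination map and a reflectance map, then brightens only the illumination component with gamma correction before recombining.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def retinex_decompose(image, sigma=15):
        """Single-scale Retinex-style decomposition: image = reflectance * illumination.

        image: float array in [0, 1], shape (H, W) or (H, W, 3).
        sigma: Gaussian blur scale used to estimate the smooth illumination map.
        """
        eps = 1e-6
        # Blur spatially only (avoid mixing color channels for RGB inputs).
        blur_sigma = (sigma, sigma, 0) if image.ndim == 3 else sigma
        illumination = gaussian_filter(image, sigma=blur_sigma)
        # Reflectance holds the detail: the input divided by its illumination estimate.
        reflectance = image / (illumination + eps)
        return illumination, reflectance

    def enhance_low_light(image, gamma=0.5):
        """Brighten a dark image by gamma-correcting the illumination map only."""
        illumination, reflectance = retinex_decompose(image)
        # gamma < 1 lifts dark regions; reflectance preserves edges and texture.
        enhanced = reflectance * np.power(illumination, gamma)
        return np.clip(enhanced, 0.0, 1.0)

The learned methods in the papers below replace the fixed Gaussian estimate and the global gamma with networks that predict illumination, curves, or correction factors per pixel, but the decompose-adjust-recombine structure is the same basic idea.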
Papers
Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network
Yinglong Wang, Zhen Liu, Jianzhuang Liu, Songcen Xu, Shuaicheng Liu
Self-Reference Deep Adaptive Curve Estimation for Low-Light Image Enhancement
Jianyu Wen, Chenhao Wu, Tong Zhang, Yixuan Yu, Piotr Swierczynski
FeatEnHancer: Enhancing Hierarchical Features for Object Detection and Beyond Under Low-Light Vision
Khurram Azeem Hashmi, Goutham Kallempudi, Didier Stricker, Muhammad Zeshan Afzal
Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model
Xin Jin, Jia-Wen Xiao, Ling-Hao Han, Chunle Guo, Xialei Liu, Chongyi Li, Ming-Ming Cheng