Sparse View Image
Sparse view image processing focuses on reconstructing high-quality images or 3D models from a limited number of input views, where the reconstruction problem is inherently underconstrained. Current research emphasizes efficient and accurate neural rendering methods, often built on transformer-based architectures or U-Net variants and sometimes augmented with diffusion models, to improve image quality and 3D reconstruction from sparse data. This capability is crucial for applications ranging from medical imaging (e.g., reducing radiation exposure by acquiring fewer CT projections) to augmented and virtual reality (e.g., real-time light field generation), where capturing dense view data is impractical or impossible. Robust and efficient algorithms for this sparse-view setting are driving progress across these diverse areas.
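To make the underconstrained nature of sparse-view data concrete, the following minimal sketch (not taken from the listed papers) simulates a CT-style scan of the Shepp-Logan phantom at a decreasing number of view angles and reconstructs it with filtered back-projection, using scikit-image's radon/iradon. The specific phantom size, view counts, and RMSE metric are illustrative choices; the point is that reconstruction error grows as the number of views shrinks, which is exactly the gap the learned methods above try to close.

```python
# Illustrative sketch, assuming NumPy and scikit-image are installed.
# It is NOT an implementation of any method from the papers listed here;
# it only demonstrates the sparse-view degradation that those methods address.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# A small phantom keeps the demo fast.
phantom = resize(shepp_logan_phantom(), (128, 128), anti_aliasing=True)

def reconstruction_error(num_views: int) -> float:
    """Simulate a scan with `num_views` equally spaced angles, reconstruct,
    and return the RMSE against the ground-truth phantom."""
    theta = np.linspace(0.0, 180.0, num_views, endpoint=False)
    sinogram = radon(phantom, theta=theta)                      # forward projection
    recon = iradon(sinogram, theta=theta, filter_name="ramp")   # filtered back-projection
    return float(np.sqrt(np.mean((recon - phantom) ** 2)))

for views in (180, 30, 8):  # dense, moderate, and sparse view counts
    print(f"{views:3d} views -> RMSE {reconstruction_error(views):.4f}")
```

Running the loop shows the error climbing and streak artifacts appearing as views drop from 180 to 8, which is the regime where learned priors (transformers, U-Nets, diffusion models) are used to fill in the missing information.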
Papers
AIM 2024 Sparse Neural Rendering Challenge: Methods and Results
Michal Nazarczuk, Sibi Catley-Chandar, Thomas Tanay, Richard Shaw, Eduardo Pérez-Pellitero, Radu Timofte, Xing Yan, Pan Wang, Yali Guo, Yongxin Wu, Youcheng Cai, Yanan Yang, Junting Li, Yanghong Zhou, P. Y. Mok, Zongqi He, Zhe Xiao, Kin-Chung Chan, Hana Lebeta Goshu, Cuixin Yang, Rongkang Dong, Jun Xiao, Kin-Man Lam, Jiayao Hao, Qiong Gao, Yanyan Zu, Junpei Zhang, Licheng Jiao, Xu Liu, Kuldeep Purohit
AIM 2024 Sparse Neural Rendering Challenge: Dataset and Benchmark
Michal Nazarczuk, Thomas Tanay, Sibi Catley-Chandar, Richard Shaw, Radu Timofte, Eduardo Pérez-Pellitero