Video Super-Resolution
Video super-resolution (VSR) aims to reconstruct high-resolution videos from low-resolution inputs, improving visual quality for applications such as streaming and broadcasting. Current research emphasizes efficient algorithms, often built on recurrent neural networks, transformers, and generative adversarial networks (GANs), that pursue real-time performance and high fidelity, particularly on resource-constrained devices. The field matters because it can improve the viewing experience of low-quality video content across many platforms and devices; ongoing work addresses challenges such as temporal consistency, artifact reduction, and efficient model design.
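To make the recurrent approach mentioned above concrete, here is a minimal sketch of temporal propagation in VSR: each low-resolution frame is fused with a hidden state carried over from the previous frame, so the network can reuse information across time, and the result is upsampled with pixel shuffle. The class and function names (RecurrentVSRCell, super_resolve), layer sizes, and the 4x scale factor are illustrative assumptions, not the method of any paper listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentVSRCell(nn.Module):
    """Toy recurrent VSR cell: fuse the current low-res frame with the
    hidden state from the previous frame, refine, and upsample 4x."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # Input: 3 RGB channels of the frame + `channels` of hidden state.
        self.fuse = nn.Conv2d(3 + channels, channels, 3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # channels -> 3 * 4^2, then PixelShuffle(4) rearranges to 4x spatial size.
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * 16, 3, padding=1), nn.PixelShuffle(4)
        )

    def forward(self, lr_frame, hidden):
        x = self.fuse(torch.cat([lr_frame, hidden], dim=1))
        hidden = self.body(x) + x  # residual update of the temporal state
        sr_frame = self.upsample(hidden)
        # Bilinear skip connection: the network only predicts the residual
        # detail on top of a naive upsampling, which stabilizes training.
        sr_frame = sr_frame + F.interpolate(
            lr_frame, scale_factor=4, mode="bilinear", align_corners=False
        )
        return sr_frame, hidden

def super_resolve(frames, cell, channels: int = 64):
    """frames: tensor of shape (T, 1, 3, H, W); returns T tensors (1, 3, 4H, 4W)."""
    _, _, _, h, w = frames.shape
    hidden = torch.zeros(1, channels, h, w)  # zero state for the first frame
    outputs = []
    for lr in frames:  # process the sequence frame by frame
        sr, hidden = cell(lr, hidden)
        outputs.append(sr)
    return outputs
```

This single-direction sketch omits what much of the listed work studies, e.g. explicit flow-based alignment of the hidden state to the current frame and bidirectional propagation, but it shows why temporal consistency is a natural strength of recurrent designs: detail recovered in one frame is carried forward to the next.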
Papers
Boosting Video Super Resolution with Patch-Based Temporal Redundancy Optimization
Yuhao Huang, Hang Dong, Jinshan Pan, Chao Zhu, Yu Guo, Ding Liu, Lean Fu, Fei Wang
Geometry-Aware Reference Synthesis for Multi-View Image Super-Resolution
Ri Cheng, Yuqi Sun, Bo Yan, Weimin Tan, Chenxi Ma
Rethinking Alignment in Video Super-Resolution Transformers
Shuwei Shi, Jinjin Gu, Liangbin Xie, Xintao Wang, Yujiu Yang, Chao Dong
Combining Contrastive and Supervised Learning for Video Super-Resolution Detection
Viacheslav Meshchaninov, Ivan Molodetskikh, Dmitriy Vatolin
Unsupervised Flow-Aligned Sequence-to-Sequence Learning for Video Restoration
Jing Lin, Xiaowan Hu, Yuanhao Cai, Haoqian Wang, Youliang Yan, Xueyi Zou, Yulun Zhang, Luc Van Gool