Implicit Neural Representation
Implicit neural representations (INRs) use neural networks to encode signals as continuous functions of coordinates, yielding compact, resolution-agnostic representations of data. Current research focuses on improving INR architectures, for example by incorporating convolutional layers or learnable activations, and on developing algorithms for tasks such as super-resolution, image compression, and 3D reconstruction. Because they enable efficient storage, manipulation, and generation of high-dimensional data, INRs offer practical advantages in fields including medical imaging, computer graphics, and computational fluid dynamics, and more robust and efficient INRs promise to advance these fields considerably.
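To make the core idea concrete, here is a minimal sketch of an INR in PyTorch (the framework choice, network size, toy target signal, and training hyperparameters are all illustrative assumptions, not the method of any paper listed below): a small MLP is fit to map 2D coordinates to intensities for one signal, after which the signal can be queried at any resolution.

```python
import torch
import torch.nn as nn

# Minimal implicit neural representation: an MLP that maps a 2D
# coordinate (x, y) in [-1, 1]^2 to a grayscale intensity. Fitting the
# network to one signal "encodes" that signal in the weights, and the
# signal can then be sampled at arbitrary coordinates.
class INR(nn.Module):
    def __init__(self, hidden=256, layers=3):
        super().__init__()
        dims = [2] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                # Plain ReLU for simplicity; sine activations (SIREN) or
                # Fourier feature encodings are common in practice to
                # capture high-frequency content.
                blocks.append(nn.ReLU())
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):
        return self.net(coords)

# Toy target: a radial pattern sampled on a 64x64 grid.
res = 64
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, res), torch.linspace(-1, 1, res), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)            # (res*res, 2)
target = torch.sin(8 * (coords ** 2).sum(dim=-1, keepdim=True))  # (res*res, 1)

model = INR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = ((model(coords) - target) ** 2).mean()
    loss.backward()
    opt.step()

# Because the learned representation is continuous, it can be queried on
# a denser grid than it was trained on, without retraining.
hi = 256
ys2, xs2 = torch.meshgrid(
    torch.linspace(-1, 1, hi), torch.linspace(-1, 1, hi), indexing="ij"
)
with torch.no_grad():
    upsampled = model(torch.stack([xs2, ys2], -1).reshape(-1, 2)).reshape(hi, hi)
```

The final query on a 4x denser grid illustrates the arbitrary-scale setting that super-resolution work in this area (such as the first paper below) builds on; real methods condition on input images rather than fitting one network per signal.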
Papers
Cascaded Local Implicit Transformer for Arbitrary-Scale Super-Resolution
Hao-Wei Chen, Yu-Syuan Xu, Min-Fong Hong, Yi-Min Tsai, Hsien-Kai Kuo, Chun-Yi Lee
AnyFlow: Arbitrary Scale Optical Flow with Implicit Neural Representation
Hyunyoung Jung, Zhuo Hui, Lei Luo, Haitao Yang, Feng Liu, Sungjoo Yoo, Rakesh Ranjan, Denis Demandolx