Implicit Neural Representation
Implicit neural representations (INRs) use neural networks to encode signals as continuous functions of coordinates, yielding compact and resolution-agnostic data representations: the signal can be queried at any coordinate, not just on a fixed grid. Current research focuses on improving INR architectures, for example by incorporating convolutional layers or learnable activations, and on developing algorithms for tasks such as super-resolution, image compression, and 3D reconstruction. This approach benefits fields including medical imaging, computer graphics, and computational fluid dynamics by enabling efficient storage, manipulation, and generation of high-dimensional data.
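The core idea, fitting a continuous function of coordinates to a discretely sampled signal and then querying it at arbitrary resolution, can be illustrated with a deliberately minimal toy (not taken from any of the listed papers): a sinusoidal feature encoding with a linear readout, solved in closed form, standing in for the MLP that a real INR would train by gradient descent. The signal, frequencies, and grid sizes below are all illustrative choices.

```python
import numpy as np

def signal(x):
    # Ground-truth 1-D signal we want to represent implicitly.
    return np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)

def features(x, freqs):
    # Encode each coordinate with sinusoids (a Fourier-feature-style
    # encoding, often used in INRs to capture high-frequency detail).
    proj = 2 * np.pi * np.outer(x, freqs)            # shape (N, K)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

# "Train" the representation on a coarse 64-point grid ...
freqs = np.arange(1, 9)                              # frequencies 1..8
x_train = np.linspace(0.0, 1.0, 64)
Phi = features(x_train, freqs)
# ... by solving a (lightly ridge-regularized) least-squares problem
# for the readout weights instead of running gradient descent.
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]),
                    Phi.T @ signal(x_train))

# The representation is continuous: query it on a much finer grid.
x_fine = np.linspace(0.0, 1.0, 1000)
pred = features(x_fine, freqs) @ w
err = float(np.max(np.abs(pred - signal(x_fine))))
print(f"max reconstruction error on 1000-point grid: {err:.2e}")
```

Because the target signal lies in the span of the chosen features, the fit is near-exact here; a practical INR replaces the linear readout with an MLP (e.g., sine-activated layers) so that images, volumes, or flow fields far outside any fixed feature basis can be represented.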
Papers
An Efficient Implicit Neural Representation Image Codec Based on Mixed Autoregressive Model for Low-Complexity Decoding
Xiang Liu, Jiahong Chen, Bin Chen, Zimo Liu, Baoyi An, Shu-Tao Xia, Zhi Wang
DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations
Dogyun Park, Sihyeon Kim, Sojin Lee, Hyunwoo J. Kim