Implicit Neural Representation
Implicit neural representations (INRs) leverage neural networks to encode signals as continuous functions of coordinates, aiming to achieve compact, efficient, and resolution-agnostic data representations. Current research focuses on improving INR architectures, such as incorporating convolutional layers or learnable activations, and developing novel algorithms for tasks like super-resolution, image compression, and 3D reconstruction. This approach offers significant advantages in various fields, including medical imaging, computer graphics, and computational fluid dynamics, by enabling efficient storage, manipulation, and generation of high-dimensional data. The development of more robust and efficient INRs promises to advance these fields considerably.
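To make the core idea concrete, below is a minimal sketch of a coordinate-based INR: a small MLP that maps normalized 2D pixel coordinates to RGB values and is fitted to a single image by regression, then queried on a denser grid to illustrate resolution-agnostic decoding. The class name CoordinateMLP, the helper fit_inr, and all hyperparameters are illustrative assumptions, not the method of any paper listed here; proper SIREN-style weight initialization is omitted for brevity.

```python
# Minimal sketch of an implicit neural representation (INR):
# an MLP representing an image as a continuous function f(x, y) -> RGB.
# Names and hyperparameters are illustrative, not from any specific paper.
import torch
import torch.nn as nn


class CoordinateMLP(nn.Module):
    """MLP mapping 2D coordinates in [-1, 1]^2 to RGB values."""

    def __init__(self, hidden: int = 256, layers: int = 4, omega: float = 30.0):
        super().__init__()
        self.omega = omega  # frequency scale for sine activations (SIREN-style; careful init omitted)
        dims = [2] + [hidden] * layers
        self.hidden = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
        )
        self.out = nn.Linear(hidden, 3)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        x = coords
        for layer in self.hidden:
            # Periodic activations help the network fit high-frequency detail.
            x = torch.sin(self.omega * layer(x))
        return self.out(x)


def fit_inr(image: torch.Tensor, steps: int = 2000, lr: float = 1e-4) -> CoordinateMLP:
    """Fit the MLP to an image tensor of shape (H, W, 3) with values in [0, 1]."""
    h, w, _ = image.shape
    # Normalized coordinate grid in [-1, 1]^2, one row per pixel.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    targets = image.reshape(-1, 3)

    model = CoordinateMLP()
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        optim.zero_grad()
        loss = torch.mean((model(coords) - targets) ** 2)  # pixel-wise MSE
        loss.backward()
        optim.step()
    return model


if __name__ == "__main__":
    # Toy example: fit a random 32x32 "image"; real use would load actual data.
    img = torch.rand(32, 32, 3)
    inr = fit_inr(img, steps=200)
    # Resolution-agnostic decoding: query the same function on a denser 64x64 grid.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64), indexing="ij"
    )
    dense = inr(torch.stack([xs, ys], dim=-1).reshape(-1, 2)).reshape(64, 64, 3)
    print(dense.shape)  # torch.Size([64, 64, 3])
```

Because the signal is stored as network weights rather than a pixel grid, the same fitted model can be sampled at any resolution, which is the property the super-resolution and reconstruction papers below build on.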
Papers
Modeling the Neonatal Brain Development Using Implicit Neural Representations
Florentin Bieder, Paul Friedrich, Hélène Corbaz, Alicia Durrer, Julia Wolleb, Philippe C. Cattin
Reference-free Axial Super-resolution of 3D Microscopy Images using Implicit Neural Representation with a 2D Diffusion Prior
Kyungryun Lee, Won-Ki Jeong