Implicit Neural Representation
Implicit neural representations (INRs) encode signals as continuous functions of coordinates, parameterized by neural networks, with the goal of compact, efficient, and resolution-agnostic data representations. Current research focuses on improving INR architectures, for example by incorporating convolutional layers or learnable activations, and on developing algorithms for tasks such as super-resolution, image compression, and 3D reconstruction. By enabling efficient storage, manipulation, and generation of high-dimensional data, this approach offers significant advantages in fields including medical imaging, computer graphics, and computational fluid dynamics. More robust and efficient INRs promise to advance these areas considerably.
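As a minimal sketch of the core idea, the following toy example fits a small multilayer perceptron with sine activations (in the spirit of SIREN-style INRs, though this is an illustrative simplification, not any specific paper's method) to a 1-D signal: the network maps a coordinate x in [0, 1] to a signal value, and once trained it can be queried at any resolution. All hyperparameters (hidden width, frequency scale of the initialization, learning rate, step count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target signal sampled on a coarse training grid of coordinates.
x_train = np.linspace(0.0, 1.0, 64)[:, None]   # coordinates, shape (N, 1)
y_train = np.sin(2.0 * np.pi * x_train)        # signal values, shape (N, 1)

H = 64                                          # hidden width (assumed)
W1 = rng.uniform(-10.0, 10.0, size=(1, H))      # broad frequency init
b1 = rng.uniform(-np.pi, np.pi, size=H)
W2 = rng.uniform(-0.1, 0.1, size=(H, 1))
b2 = np.zeros(1)

def forward(x):
    """Map coordinates to signal values: x -> sin(x W1 + b1) W2 + b2."""
    z = x @ W1 + b1                             # pre-activations, (N, H)
    h = np.sin(z)                               # sinusoidal features
    return z, h, h @ W2 + b2                    # prediction, (N, 1)

lr = 1e-2
losses = []
for step in range(3000):
    z, h, pred = forward(x_train)
    err = pred - y_train
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation by hand: full-batch gradient descent on the MSE.
    d_pred = 2.0 * err / err.shape[0]
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    dz = (d_pred @ W2.T) * np.cos(z)            # through the sine activation
    dW1 = x_train.T @ dz
    db1 = dz.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# The trained representation is a continuous function of the coordinate,
# so it can be evaluated on a grid 4x finer than the training grid.
x_fine = np.linspace(0.0, 1.0, 256)[:, None]
_, _, y_fine = forward(x_fine)
print(f"initial MSE {losses[0]:.4f} -> final MSE {losses[-1]:.4f}")
```

The same pattern scales to the papers below by changing the input coordinates (e.g., 3-D voxel positions for neonatal MRI or CFD flow fields) and the output dimensionality; the resolution-agnostic query at the end is what super-resolution approaches exploit.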
Papers
Modeling the Neonatal Brain Development Using Implicit Neural Representations
Florentin Bieder, Paul Friedrich, Hélène Corbaz, Alicia Durrer, Julia Wolleb, Philippe C. Cattin
Reference-free Axial Super-resolution of 3D Microscopy Images using Implicit Neural Representation with a 2D Diffusion Prior
Kyungryun Lee, Won-Ki Jeong
Implicit Neural Representation For Accurate CFD Flow Field Prediction
Laurent de Vito, Nils Pinnau, Simone Dey
Uncertainty-Informed Volume Visualization using Implicit Neural Representation
Shanu Saklani, Chitwan Goel, Shrey Bansal, Zhe Wang, Soumya Dutta, Tushar M. Athawale, David Pugmire, Christopher R. Johnson