Implicit Neural Representation
Implicit neural representations (INRs) use neural networks to encode signals as continuous functions of coordinates: instead of storing a discrete grid of samples, a network is trained so that querying it at any coordinate returns the signal value there, yielding compact, resolution-agnostic representations. Current research focuses on improving INR architectures, for example by incorporating convolutional layers or learnable activations, and on developing algorithms for tasks such as super-resolution, image compression, and 3D reconstruction. Because the signal lives in the network's weights rather than in a fixed-resolution array, INRs enable efficient storage, manipulation, and generation of high-dimensional data in fields including medical imaging, computer graphics, and computational fluid dynamics. More robust and efficient INRs therefore promise to advance these fields considerably.
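To make the coordinate-to-value idea concrete, here is a minimal PyTorch sketch (not taken from any of the papers listed below) that fits a small sine-activated MLP, in the spirit of SIREN, to a toy 2D signal and then queries the same network on a denser grid; the layer sizes, frequency omega, learning rate, and training loop are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation (SIREN-style); omega is an assumed frequency."""
    def __init__(self, in_features, out_features, omega=30.0):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega * self.linear(x))

class INR(nn.Module):
    """MLP mapping (x, y) coordinates in [-1, 1]^2 to a scalar signal value."""
    def __init__(self, hidden=256, depth=3):
        super().__init__()
        layers = [SineLayer(2, hidden)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
        layers += [nn.Linear(hidden, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

# Build a 64x64 grid of coordinates and a smooth synthetic target signal.
side = 64
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, side), torch.linspace(-1, 1, side), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (N, 2) coordinate inputs
target = (torch.sin(4 * xs) * torch.cos(4 * ys)).reshape(-1, 1)

# Fit the network so it reproduces the signal at the sampled coordinates.
model = INR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(500):
    opt.zero_grad()
    loss = ((model(coords) - target) ** 2).mean()        # per-sample MSE
    loss.backward()
    opt.step()

# Resolution-agnostic: query the trained network on a denser 256x256 grid,
# which acts as continuous interpolation of the fitted signal.
with torch.no_grad():
    hi = 256
    ys2, xs2 = torch.meshgrid(
        torch.linspace(-1, 1, hi), torch.linspace(-1, 1, hi), indexing="ij"
    )
    dense = model(torch.stack([xs2, ys2], dim=-1).reshape(-1, 2)).reshape(hi, hi)
```

The final query illustrates why INRs are resolution-agnostic: because the representation is a continuous function, sampling it on a finer grid requires no retraining, only more forward passes.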
Papers
CoordX: Accelerating Implicit Neural Representation with a Split MLP Architecture
Ruofan Liang, Hongyi Sun, Nandita Vijaykumar
From data to functa: Your data point is a function and you can treat it like one
Emilien Dupont, Hyunjik Kim, S. M. Ali Eslami, Danilo Rezende, Dan Rosenbaum
Time-Series Anomaly Detection with Implicit Neural Representation
Kyeong-Joong Jeong, Yong-Min Shin