Paper ID: 2410.09771

Magnituder Layers for Implicit Neural Representations in 3D

Sang Min Kim (1), Byeongchan Kim (1), Arijit Sehanobish (2), Krzysztof Choromanski (3 and 4), Dongseok Shim (1), Avinava Dubey (5), Min-hwan Oh (1) ((1) Seoul National University, (2) Independent Researcher, (3) Google DeepMind, (4) Columbia University, (5) Google Research)

Improving the efficiency and performance of implicit neural representations in 3D, particularly Neural Radiance Fields (NeRF) and Signed Distance Fields (SDF), is crucial for enabling their use in real-time applications. These models, while capable of generating photo-realistic novel views and detailed 3D reconstructions, often suffer from high computational costs and slow inference times. To address this, we introduce a novel neural network layer called the "magnituder", designed to reduce the number of trainable parameters in these models without sacrificing their expressive power. By integrating magnituders into standard feed-forward layer stacks, we achieve improved inference speed and adaptability. Furthermore, our approach enables a zero-shot performance boost in trained implicit neural representation models through layer-wise knowledge transfer without backpropagation, leading to more efficient scene reconstruction in dynamic environments.
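The abstract does not spell out the layer's construction, so the following is only a minimal PyTorch sketch of one plausible reading of the two claims above: a parameter-light layer (a fixed random projection followed by an element-wise magnitude, with only a small learnable scale vector trained) and, because the output is then linear in those scales, a closed-form layer-wise fit to a teacher layer that needs no backpropagation. The class `Magnituder`, the function `fit_scale_to_teacher`, and the attributes `proj` and `scale` are hypothetical names for illustration; consult the paper for the actual layer definition.

```python
import torch
import torch.nn as nn


class Magnituder(nn.Module):
    """Hypothetical sketch of a parameter-light layer in the spirit of the
    abstract: a fixed (non-trainable) random projection, an element-wise
    magnitude, and a learnable per-unit scale. Not the paper's definition."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Fixed random projection: registered as a buffer, so it adds
        # no trainable parameters.
        self.register_buffer(
            "proj", torch.randn(out_features, in_features) / in_features ** 0.5
        )
        # Only these per-unit scales are trained: out_features parameters
        # instead of out_features * in_features for a dense layer.
        self.scale = nn.Parameter(torch.ones(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Element-wise magnitude of the fixed projection, then scaling.
        return self.scale * torch.abs(x @ self.proj.T)


@torch.no_grad()
def fit_scale_to_teacher(
    layer: Magnituder, x: torch.Tensor, teacher_out: torch.Tensor
) -> None:
    """Backprop-free layer-wise knowledge transfer (illustrative): since the
    output is linear in `scale`, matching a teacher's activations splits into
    independent 1-D least-squares problems, s_i = <f_i, t_i> / <f_i, f_i>."""
    feats = torch.abs(x @ layer.proj.T)            # (N, out_features)
    num = (feats * teacher_out).sum(dim=0)         # per-unit <f_i, t_i>
    den = (feats * feats).sum(dim=0).clamp_min(1e-12)
    layer.scale.copy_(num / den)
```

Under this (assumed) construction, the closed-form fit is what makes the "without backpropagation" claim plausible: each scale is recovered by a single pass over cached activations rather than by gradient descent.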

Submitted: Oct 13, 2024