Neural Radiance Field
Neural Radiance Fields (NeRFs) are a powerful technique for creating realistic 3D scene representations from 2D images, aiming to reconstruct both geometry and appearance. Current research focuses on improving efficiency and robustness, exploring variations like Gaussian splatting for faster rendering and adapting NeRFs for diverse data modalities (LiDAR, infrared, ultrasound) and challenging conditions (low light, sparse views). This technology has significant implications for various fields, including autonomous driving, robotics, medical imaging, and virtual/augmented reality, by enabling high-fidelity 3D scene modeling and novel view synthesis from limited input data.
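At the core of the novel view synthesis described above is volume rendering: a network predicts a density and color at sampled points along each camera ray, and these are composited into a pixel color with the standard NeRF quadrature. A minimal NumPy sketch of that compositing step is below (the function name and array shapes are illustrative, not from any particular implementation):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray using
    the standard NeRF volume-rendering quadrature:
        C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i = exp(-sum_{j<i} sigma_j * delta_j) is the accumulated
    transmittance up to sample i.

    sigmas: (N,)   non-negative volume densities at the samples
    colors: (N, 3) predicted RGB at the samples
    deltas: (N,)   distances between adjacent samples along the ray
    """
    # Opacity contributed by each sample interval.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Final compositing weights, then the weighted sum of colors.
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

For example, if the first sample is effectively opaque (very large density), the rendered color is just that sample's color, since all later samples are occluded; if all densities are zero, the ray returns black.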
975 papers
Papers - Page 36
May 7, 2023
May 4, 2023
May 2, 2023
April 28, 2023
April 27, 2023
Learning a Diffusion Prior for NeRFs
Guandao Yang, Abhijit Kundu, Leonidas J. Guibas, Jonathan T. Barron, Ben Poole
Combining HoloLens with Instant-NeRFs: Advanced Real-Time 3D Mobile Mapping
Dennis Haitz, Boris Jutzi, Markus Ulrich, Miriam Jaeger, Patrick Huebner
Compositional 3D Human-Object Neural Animation
Zhi Hou, Baosheng Yu, Dacheng Tao
ContraNeRF: 3D-Aware Generative Model via Contrastive Learning with Unsupervised Implicit Pose Embedding
Mijeong Kim, Hyunjoon Lee, Bohyung Han
April 25, 2023
April 24, 2023
TextMesh: Generation of Realistic 3D Meshes From Text Prompts
Christina Tsalicoglou, Fabian Manhardt, Alessio Tonioni, Michael Niemeyer, Federico Tombari
HOSNeRF: Dynamic Human-Object-Scene Neural Radiance Fields from a Single Video
Jia-Wei Liu, Yan-Pei Cao, Tianyuan Yang, Eric Zhongcong Xu, Jussi Keppo, Ying Shan, Xiaohu Qie, Mike Zheng Shou
April 22, 2023