Neural Radiance Field
Neural Radiance Fields (NeRFs) are a powerful technique for creating realistic 3D scene representations from 2D images, aiming to reconstruct both geometry and appearance. Current research focuses on improving efficiency and robustness, exploring variations like Gaussian splatting for faster rendering and adapting NeRFs for diverse data modalities (LiDAR, infrared, ultrasound) and challenging conditions (low light, sparse views). This technology has significant implications for various fields, including autonomous driving, robotics, medical imaging, and virtual/augmented reality, by enabling high-fidelity 3D scene modeling and novel view synthesis from limited input data.
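The novel view synthesis described above rests on volume rendering: a NeRF is queried at sample points along each camera ray for a density and a color, which are alpha-composited into a pixel. Below is a minimal sketch of that compositing step, assuming per-ray arrays of densities, colors, and sample spacings (the quadrature rule from the original NeRF formulation, not any specific library's API):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite densities and colors sampled along one ray.

    sigmas: (N,) volume densities at each sample
    colors: (N, 3) RGB emitted at each sample
    deltas: (N,) distances between adjacent samples
    """
    # Opacity contributed by each ray segment.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: fraction of light surviving to each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Expected color of the ray (the rendered pixel value).
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# Toy example: empty space in front of a dense red sample.
sigmas = np.array([0.0, 50.0])
colors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
deltas = np.array([0.1, 0.1])
rgb, weights = composite_ray(sigmas, colors, deltas)
```

Training a NeRF amounts to optimizing the network that produces `sigmas` and `colors` so that rendered pixels match the input photographs; the rendering step itself is fully differentiable.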
975 papers
Papers - Page 17
March 18, 2024
ThermoNeRF: Joint RGB and Thermal Novel View Synthesis for Building Facades using Multimodal Neural Radiance Fields
GNeRP: Gaussian-guided Neural Reconstruction of Reflective Objects with Noisy Polarization Priors
Exploring Multi-modal Neural Scene Representations With Applications on Thermal Imaging
BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting
Aerial Lifting: Neural Urban Semantic and Building Instance Lifting from Aerial Imagery
March 16, 2024
Fast Sparse View Guided NeRF Update for Object Reconfigurations
ARC-NeRF: Area Ray Casting for Broader Unseen View Coverage in Few-shot Object Rendering
MSI-NeRF: Linking Omni-Depth with View Synthesis through Multi-Sphere Image aided Generalizable Neural Radiance Field
DPPE: Dense Pose Estimation in a Plenoxels Environment using Gradient Approximation
March 15, 2024
Thermal-NeRF: Neural Radiance Fields from an Infrared Camera
Leveraging Neural Radiance Field in Descriptor Synthesis for Keypoints Scene Coordinate Regression
URS-NeRF: Unordered Rolling Shutter Bundle Adjustment for Neural Radiance Fields
DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video