LiDAR View Synthesis

LiDAR view synthesis aims to generate novel LiDAR point cloud views from existing scans, enabling applications such as autonomous driving simulation and data augmentation. Current research focuses on neural network architectures, often based on Neural Radiance Fields (NeRFs), that implicitly represent and render 3D scenes from LiDAR data, addressing challenges such as data sparsity, dynamic scene content, and the need to preserve accurate geometry. These methods often incorporate geometric constraints and multimodal fusion with other sensors (e.g., cameras, radar) to improve accuracy and realism. The resulting advances matter for robotics, autonomous systems, and 3D scene understanding: synthetic LiDAR data supports training and testing, and enables more robust perception in challenging environments.
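To illustrate the NeRF-style rendering these methods build on, the sketch below volume-renders the expected return depth along a single LiDAR ray from a density field. This is a minimal illustration, not any specific paper's method: the function names and the toy density field are hypothetical stand-ins for a learned network.

```python
import numpy as np

def render_lidar_ray(origin, direction, density_fn, near=0.5, far=60.0, n_samples=128):
    """Render the expected return depth along one LiDAR ray by volume
    rendering a (possibly learned) density field, NeRF-style."""
    # Sample distances along the ray and lift them to 3D points.
    t = np.linspace(near, far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]

    # Query the scene density at each sample point.
    sigma = density_fn(pts)                                  # shape (n_samples,)

    # Convert densities to per-sample opacities (alpha compositing).
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))
    alpha = 1.0 - np.exp(-sigma * delta)

    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans

    # Expected depth of the LiDAR return along this ray.
    depth = np.sum(weights * t)
    return depth, weights

# Toy density field (hypothetical): a single opaque "wall" 10 m ahead of the sensor.
def toy_density(pts):
    return np.where(pts[:, 0] > 10.0, 5.0, 0.0)

depth, _ = render_lidar_ray(np.zeros(3), np.array([1.0, 0.0, 0.0]), toy_density)
print(f"expected return depth: {depth:.2f} m")               # roughly 10 m
```

In practice, `density_fn` would be an MLP trained against real scans, rays would follow the sensor's scan pattern, and additional heads could predict per-return intensity or ray-drop probability; the compositing step above stays the same.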

Papers