Paper ID: 2410.19801
Radon Implicit Field Transform (RIFT): Learning Scenes from Radar Signals
Daqian Bao, Alex Saad-Falcon, Justin Romberg
Data acquisition in array signal processing (ASP) is costly, as high angular and range resolutions require large antenna apertures and wide frequency bandwidths. Data requirements grow multiplicatively with the number of viewpoints and frequencies, further increasing the collection burden. Implicit Neural Representations (INRs), neural network models of 3D scenes, offer compact, continuous representations that can be learned from minimal data and interpolate to unseen viewpoints, potentially reducing sampling costs in ASP. We propose the Radon Implicit Field Transform (RIFT), which combines a radar forward model (the Generalized Radon Transform, GRT) with an INR-based scene representation learned directly from radar signals. The method extends to other ASP problems by replacing the GRT with an appropriate forward model. In experiments, we synthesize radar data using the GRT and train the INR by minimizing radar signal reconstruction error. We then render the scene from the trained INR and evaluate it against ground truth. We introduce new error metrics: phase-Root Mean Square Error (p-RMSE) and magnitude-Structural Similarity Index Measure (m-SSIM). Compared to traditional scene models, our RIFT model achieves up to 188% improvement in scene reconstruction with only 10% of the data. Using the same amount of data, RIFT achieves 3x better reconstruction and shows a 10% improvement when generalizing to unseen viewpoints.
Submitted: Oct 16, 2024
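To make the abstract's pipeline concrete, the sketch below shows one plausible way to fit an INR to radar measurements through a differentiable GRT-style forward model, as described above: the network maps 3D coordinates to complex reflectivity, a point-scatterer rendering with round-trip phase delays plays the role of the GRT, and training minimizes the radar signal reconstruction error. This is a minimal illustration under stated assumptions, not the authors' implementation; the names (`SceneINR`, `grt_forward`, `fit_rift`), the MLP architecture, and the discretized forward model are all illustrative choices.

```python
# Hedged sketch of the RIFT idea: fit an INR so that a GRT-like forward model
# reproduces measured radar signals. Architecture and forward model are assumptions.
import torch
import torch.nn as nn

class SceneINR(nn.Module):
    """INR: maps 3D coordinates to complex scene reflectivity (real, imag)."""
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        mods, dim = [], 3
        for _ in range(layers):
            mods += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        mods.append(nn.Linear(dim, 2))  # output: real and imaginary parts
        self.net = nn.Sequential(*mods)

    def forward(self, xyz):  # xyz: (N, 3) scene sample points
        out = self.net(xyz)
        return torch.complex(out[..., 0], out[..., 1])

def grt_forward(reflectivity, xyz, sensor_pos, freqs, c=3e8):
    """Simplified GRT-style forward model: sum reflectivity over scene points
    with round-trip phase delays per sensor position and frequency.
    reflectivity: (N,) complex; xyz: (N, 3); sensor_pos: (M, 3); freqs: (F,)."""
    ranges = torch.cdist(sensor_pos, xyz)                                  # (M, N)
    phase = -4j * torch.pi * freqs[None, :, None] * ranges[:, None, :] / c  # (M, F, N)
    return (reflectivity[None, None, :] * torch.exp(phase)).sum(-1)        # (M, F)

def fit_rift(measurements, xyz, sensor_pos, freqs, steps=2000, lr=1e-4):
    """Train the INR by minimizing radar signal reconstruction error."""
    inr = SceneINR()
    opt = torch.optim.Adam(inr.parameters(), lr=lr)
    for _ in range(steps):
        pred = grt_forward(inr(xyz), xyz, sensor_pos, freqs)
        loss = (pred - measurements).abs().pow(2).mean()  # signal-domain MSE
        opt.zero_grad()
        loss.backward()
        opt.step()
    return inr
```

After training, the scene would be rendered by querying the INR on a dense coordinate grid; under the naming above, evaluating `inr(xyz_grid)` gives the reconstructed reflectivity, whose magnitude and phase could then be compared against ground truth with metrics such as the m-SSIM and p-RMSE mentioned in the abstract.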