Radar Point Cloud
Radar point clouds, which represent 3D scenes as sparse sets of reflections measured by radar sensors, are central to advancing autonomous driving and robotics. Current research focuses on overcoming the inherent sparsity and noise of radar data through techniques such as point cloud upsampling, richer feature extraction (e.g., graph neural networks and attention mechanisms), and multi-modal fusion with camera or LiDAR data. These advances strengthen object detection, scene understanding, and mapping, ultimately yielding more robust and reliable autonomous systems.
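As an illustration of the graph-based feature extraction mentioned above (not taken from the listed papers), the following is a minimal sketch of an EdgeConv-style layer that aggregates local geometry over a k-nearest-neighbour graph built from a sparse radar point cloud. The point count, feature channels (e.g., Doppler and RCS), and the value of k are assumptions chosen for the example.

```python
# Minimal sketch of kNN-graph feature aggregation for a sparse radar point cloud.
# Illustrative only; dimensions and hyperparameters are arbitrary assumptions.
import torch
import torch.nn as nn


def knn_indices(points: torch.Tensor, k: int) -> torch.Tensor:
    """Return indices of the k nearest neighbours for each point, shape (N, k)."""
    dists = torch.cdist(points, points)                      # (N, N) pairwise distances
    return dists.topk(k + 1, largest=False).indices[:, 1:]   # drop self-neighbour


class EdgeConvLayer(nn.Module):
    """Aggregates local geometric features over a kNN graph (EdgeConv-style)."""

    def __init__(self, in_dim: int, out_dim: int, k: int = 8):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, feats: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        idx = knn_indices(points, self.k)                      # (N, k)
        neighbours = feats[idx]                                # (N, k, C)
        center = feats.unsqueeze(1).expand_as(neighbours)      # (N, k, C)
        edge = torch.cat([center, neighbours - center], -1)    # (N, k, 2C)
        return self.mlp(edge).max(dim=1).values                # max-pool over neighbours


if __name__ == "__main__":
    # Toy radar frame: 64 points with (x, y, z, Doppler, RCS) channels (assumed layout).
    pts = torch.randn(64, 3)
    feats = torch.cat([pts, torch.randn(64, 2)], dim=-1)       # 5-D per-point features
    layer = EdgeConvLayer(in_dim=5, out_dim=32, k=8)
    print(layer(feats, pts).shape)                             # torch.Size([64, 32])
```

The max-pool over neighbours makes the learned feature invariant to the ordering of nearby reflections, which is one reason graph-style operators are a common fit for sparse, unordered radar returns.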
Papers
Enhanced Radar Perception via Multi-Task Learning: Towards Refined Data for Sensor Fusion Applications
Huawei Sun, Hao Feng, Gianfranco Mauro, Julius Ott, Georg Stettinger, Lorenzo Servadei, Robert Wille
Diffusion-Based Point Cloud Super-Resolution for mmWave Radar Data
Kai Luan, Chenghao Shi, Neng Wang, Yuwei Cheng, Huimin Lu, Xieyuanli Chen