Paper ID: 2202.10047

PCSCNet: Fast 3D Semantic Segmentation of LiDAR Point Cloud for Autonomous Car using Point Convolution and Sparse Convolution Network

Jaehyun Park, Chansoo Kim, Kichun Jo

An autonomous car must recognize the driving environment quickly for safe driving. As the Light Detection And Ranging (LiDAR) sensor is widely used in autonomous cars, fast semantic segmentation of LiDAR point clouds, i.e., point-wise classification of the point cloud within the sensor frame rate, has attracted attention for recognizing the driving environment. Although voxel- and fusion-based models have recently achieved state-of-the-art accuracy in point cloud semantic segmentation, their real-time performance suffers from the high computational load caused by high voxel resolution. In this paper, we propose a fast voxel-based semantic segmentation model using point convolution and 3D sparse convolution (PCSCNet). The proposed model is designed to perform well at both high and low voxel resolutions by using point convolution-based feature extraction. Moreover, it accelerates feature propagation by applying 3D sparse convolution after feature extraction. Experimental results demonstrate that the proposed model outperforms state-of-the-art real-time models on the SemanticKITTI and nuScenes semantic segmentation benchmarks and achieves real-time LiDAR point cloud inference.
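To make the two-stage idea in the abstract concrete, the sketch below shows per-voxel feature extraction with a shared point-wise MLP (a point convolution-style operator) followed by convolutional feature propagation over the voxel grid and point-wise classification. This is a minimal illustration, not the authors' implementation: PCSCNet uses 3D sparse convolution for the propagation stage, whereas a dense nn.Conv3d is used here only as a stand-in, and all names, shapes, and grid sizes are assumptions.

```python
# Hypothetical sketch: point-convolution feature extraction per voxel,
# then feature propagation over the voxel grid (dense conv as a stand-in
# for the 3D sparse convolution described in the paper).
import torch
import torch.nn as nn


class PointVoxelSegSketch(nn.Module):
    def __init__(self, in_dim=4, feat_dim=32, num_classes=20, grid=(32, 32, 8)):
        super().__init__()
        self.grid = grid
        # Shared MLP applied to every point (point convolution-style feature extraction).
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Stand-in for 3D sparse convolution: propagate voxel features spatially.
        self.propagate = nn.Sequential(
            nn.Conv3d(feat_dim, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_dim, num_classes, kernel_size=1),
        )

    def forward(self, points, voxel_idx):
        # points:    (N, in_dim) per-point features (e.g., x, y, z, intensity)
        # voxel_idx: (N,) flat voxel index of each point, in [0, X*Y*Z)
        X, Y, Z = self.grid
        feats = self.point_mlp(points)                      # (N, C)
        C = feats.shape[1]
        # Mean-pool point features into their voxels.
        voxel_feats = feats.new_zeros(X * Y * Z, C)
        voxel_feats.index_add_(0, voxel_idx, feats)
        counts = torch.bincount(voxel_idx, minlength=X * Y * Z).clamp(min=1)
        voxel_feats = voxel_feats / counts.unsqueeze(1)
        # Propagate features over the (dense, for this sketch) voxel grid.
        grid = voxel_feats.t().reshape(1, C, X, Y, Z)
        logits = self.propagate(grid)                       # (1, K, X, Y, Z)
        # Scatter voxel logits back to points for point-wise classification.
        flat = logits.reshape(logits.shape[1], -1).t()      # (X*Y*Z, K)
        return flat[voxel_idx]                              # (N, K)
```

For example, calling the module with points = torch.randn(1000, 4) and voxel_idx = torch.randint(0, 32 * 32 * 8, (1000,)) returns per-point class logits; in the real model, replacing the dense Conv3d stack with a sparse-convolution library is what keeps the propagation fast at high voxel resolution.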

Submitted: Feb 21, 2022