Point Cloud Understanding
Point cloud understanding aims to extract meaningful information from the unstructured 3D data produced by LiDAR and depth sensors. Current research focuses on efficient and robust deep learning models, including transformers, convolutional networks adapted to point clouds (e.g., KPConv), and state-space models, often combined with self-supervised learning and multi-modal approaches that pair point clouds with images. By enabling accurate object recognition, segmentation, and scene understanding from raw 3D sensor data, these advances underpin applications such as autonomous driving, robotics, and 3D scene reconstruction.
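Most of these operators share a common local aggregation pattern: group each point's k nearest neighbors, encode neighbor coordinates relative to the center point, apply a shared MLP, and pool with a symmetric function. The PyTorch sketch below illustrates this generic pattern only; the class name LocalGeometryLayer and all hyperparameters are illustrative assumptions, not the specific method of any paper listed here.

```python
import torch
import torch.nn as nn

class LocalGeometryLayer(nn.Module):
    """k-NN grouping + shared MLP + max pooling: the generic local
    feature aggregation pattern many point cloud operators build on.
    (Illustrative sketch; not the method of the papers below.)"""
    def __init__(self, in_dim: int, out_dim: int, k: int = 16):
        super().__init__()
        self.k = k
        # Shared pointwise MLP applied to every (neighbor, center) pair.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim + 3, out_dim),  # +3 for relative coordinates
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) point coordinates; feats: (B, N, C) per-point features.
        B, N, _ = xyz.shape
        # Pairwise distances -> indices of the k nearest neighbors per point.
        dists = torch.cdist(xyz, xyz)                      # (B, N, N)
        idx = dists.topk(self.k, largest=False).indices    # (B, N, k)

        batch = torch.arange(B, device=xyz.device).view(B, 1, 1)
        nbr_xyz = xyz[batch, idx]                          # (B, N, k, 3)
        nbr_feats = feats[batch, idx]                      # (B, N, k, C)

        # Encode local geometry as coordinates relative to each center point.
        rel = nbr_xyz - xyz.unsqueeze(2)                   # (B, N, k, 3)
        h = self.mlp(torch.cat([nbr_feats, rel], dim=-1))  # (B, N, k, out_dim)
        # Symmetric max pooling keeps the output invariant to the
        # ordering of neighbors within each local group.
        return h.max(dim=2).values                         # (B, N, out_dim)

# Usage: 2 clouds of 1024 points with 32-dim input features.
layer = LocalGeometryLayer(in_dim=32, out_dim=64, k=16)
out = layer(torch.randn(2, 1024, 3), torch.randn(2, 1024, 32))
print(out.shape)  # torch.Size([2, 1024, 64])
```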
Papers
Enhancing Local Geometry Learning for 3D Point Cloud via Decoupling Convolution
Haoyi Xiu, Xin Liu, Weimin Wang, Kyoung-Sook Kim, Takayuki Shinohara, Qiong Chang, Masashi Matsuoka
Enhancing Local Feature Learning Using Diffusion for 3D Point Cloud Understanding
Haoyi Xiu, Xin Liu, Weimin Wang, Kyoung-Sook Kim, Takayuki Shinohara, Qiong Chang, Masashi Matsuoka