Point Cloud Understanding
Point cloud understanding aims to extract meaningful information from unstructured 3D point cloud data, typically generated by LiDAR or depth sensors. Current research focuses on efficient and robust deep learning models, including transformers, convolutional networks adapted to point clouds (such as KPConv), and state-space models, often combined with self-supervised learning and multi-modal approaches that pair point clouds with images. These advances enable accurate object recognition, segmentation, and scene understanding from raw 3D sensor data, and are crucial for applications such as autonomous driving, robotics, and 3D scene reconstruction.
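To make the "unstructured data" point concrete, below is a minimal, illustrative sketch of a PointNet-style encoder (not taken from any specific paper referenced here): a shared per-point MLP followed by symmetric max pooling, which is the standard trick for making a network invariant to the ordering of the input points. The class name, layer sizes, and point count are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn


class PointNetTiny(nn.Module):
    """Minimal PointNet-style encoder: shared per-point MLP + symmetric max pooling.

    The max pooling makes the output invariant to the ordering of the input
    points, which is the key requirement for unstructured point clouds.
    (Illustrative sketch only; sizes are arbitrary assumptions.)
    """

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # 1x1 convolutions act as an MLP applied independently to every point:
        # (batch, 3, num_points) -> (batch, 256, num_points)
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, kernel_size=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=1), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=1), nn.ReLU(),
        )
        # Classification head applied to the pooled global feature.
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) raw xyz coordinates
        feats = self.point_mlp(points.transpose(1, 2))  # (batch, 256, num_points)
        global_feat = feats.max(dim=2).values           # order-invariant pooling
        return self.head(global_feat)                   # (batch, num_classes)


if __name__ == "__main__":
    # Eight synthetic clouds of 1024 points each, purely as a shape check.
    clouds = torch.randn(8, 1024, 3)
    logits = PointNetTiny(num_classes=10)(clouds)
    print(logits.shape)  # torch.Size([8, 10])
```

More recent architectures mentioned above (point cloud transformers, KPConv, state-space models) replace the shared MLP with attention, kernel-point convolutions, or sequence models over serialized points, but they address the same underlying problem of operating on unordered, irregular 3D data.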