Point Cloud
Point clouds are collections of 3D data points representing objects or scenes, and are central to tasks like 3D reconstruction, object recognition, and autonomous navigation. Current research focuses on improving the efficiency and robustness of point cloud processing, applying techniques such as deep learning (e.g., transformers, convolutional neural networks), optimal transport, and Gaussian splatting to tasks like registration, completion, and compression. These advances matter for applications ranging from robotics and autonomous driving to medical imaging and cultural heritage preservation, enabling more accurate and efficient analysis of complex 3D data.
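To make the registration task mentioned above concrete, here is a minimal sketch of rigid point cloud alignment using the classical Kabsch/Procrustes solution, assuming known point correspondences and NumPy; the function name `kabsch_align` and the synthetic data are illustrative, not drawn from any of the listed papers.

```python
import numpy as np

def kabsch_align(src, dst):
    """Estimate the rigid transform (R, t) mapping src onto dst,
    given known one-to-one point correspondences (Kabsch algorithm)."""
    src_c = src - src.mean(axis=0)            # center both clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic example: rotate and translate a random cloud, then recover the pose.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true

R_est, t_est = kabsch_align(src, dst)
```

In practice, registration methods such as ICP alternate this closed-form alignment with a nearest-neighbor correspondence search, since real scans do not come with known correspondences.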
Papers
A Conditional Denoising Diffusion Probabilistic Model for Point Cloud Upsampling
Wentao Qu, Yuantian Shao, Lingwu Meng, Xiaoshui Huang, Liang Xiao
A Review and A Robust Framework of Data-Efficient 3D Scene Parsing with Traditional/Learned 3D Descriptors
Kangcheng Liu
A Data-efficient Framework for Robotics Large-scale LiDAR Scene Parsing
Kangcheng Liu
Coloring the Past: Neural Historical Buildings Reconstruction from Archival Photography
David Komorowicz, Lu Sang, Ferdinand Maiwald, Daniel Cremers
Spherical Frustum Sparse Convolution Network for LiDAR Point Cloud Semantic Segmentation
Yu Zheng, Guangming Wang, Jiuming Liu, Marc Pollefeys, Hesheng Wang
UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition
Xiaohan Ding, Yiyuan Zhang, Yixiao Ge, Sijie Zhao, Lin Song, Xiangyu Yue, Ying Shan
Progressive Target-Styled Feature Augmentation for Unsupervised Domain Adaptation on Point Clouds
Zicheng Wang, Zhen Zhao, Yiming Wu, Luping Zhou, Dong Xu