Point Cloud Representation
Point cloud representation focuses on encoding 3D data efficiently as a set of points, enabling analysis and manipulation for a wide range of applications. Current research emphasizes robust, efficient methods for handling incomplete or noisy data, often leveraging self-supervised learning and incorporating multimodal information (e.g., images, text) to improve representation learning. This is pursued with diverse architectures, including Transformers, convolutional networks, and neural fields, with a strong focus on downstream tasks such as object detection, segmentation, and classification. The resulting advances have significant implications for autonomous driving, robotics, and materials science.
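To make the "set of points" encoding concrete, the sketch below shows the classic permutation-invariant recipe used by many point-cloud encoders: a shared per-point MLP followed by a symmetric max-pool. This is a minimal NumPy illustration under assumed layer sizes, not code from any of the listed papers.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def encode_point_cloud(points, seed=0):
    """Permutation-invariant global feature for an (N, 3) point cloud.

    Each point is lifted independently by a shared MLP, then max-pooling
    aggregates per-point features so the descriptor does not depend on
    point order (the core idea behind PointNet-style encoders).
    Weights are random here purely for illustration.
    """
    rng = np.random.default_rng(seed)
    n, d = points.shape                          # N points in 3D
    w1 = rng.standard_normal((d, 64)) * 0.1      # shared per-point layer 1
    w2 = rng.standard_normal((64, 256)) * 0.1    # shared per-point layer 2
    h = relu(points @ w1)                        # (N, 64) per-point features
    h = relu(h @ w2)                             # (N, 256) per-point features
    return h.max(axis=0)                         # (256,) global descriptor

# Toy usage: two shufflings of the same cloud yield the same descriptor.
cloud = np.random.default_rng(1).standard_normal((1024, 3))
f1 = encode_point_cloud(cloud)
f2 = encode_point_cloud(cloud[np.random.permutation(1024)])
assert np.allclose(f1, f2)
```

The max-pool is what makes the encoding order-invariant; real systems replace the random weights with learned ones (often trained self-supervised) and stack such blocks inside Transformer or convolutional backbones.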
Papers
GSDeformer: Direct, Real-time and Extensible Cage-based Deformation for 3D Gaussian Splatting
Jiajun Huang, Shuolin Xu, Hongchuan Yu, Jian Jun Zhang, Hammadi Nait Charif
3D Unsupervised Learning by Distilling 2D Open-Vocabulary Segmentation Models for Autonomous Driving
Boyi Sun, Yuhang Liu, Xingxia Wang, Bin Tian, Long Chen, Fei-Yue Wang