Paper ID: 2412.18775

ObitoNet: Multimodal High-Resolution Point Cloud Reconstruction

Apoorv Thapliyal, Vinay Lanka, Swathi Baskaran

ObitoNet employs a cross-attention mechanism to integrate multimodal inputs: a Vision Transformer (ViT) extracts semantic features from images, while a point cloud tokenizer captures spatial structure using Farthest Point Sampling (FPS) and K-Nearest Neighbors (KNN). The fused multimodal features are fed into a transformer-based decoder for high-resolution point cloud reconstruction. This approach leverages the complementary strengths of both modalities (rich image features and precise geometric details), ensuring robust point cloud generation even in challenging conditions such as sparse or noisy data.
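The abstract does not include code; below is a minimal PyTorch sketch of the pipeline it describes: FPS-based center selection, KNN grouping of points into local patches, and cross-attention fusion of point tokens with ViT-style image tokens. All function names, dimensions (64 centers, 32 neighbors, 256-dim tokens), and the residual fusion design are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def farthest_point_sampling(points: torch.Tensor, n_samples: int) -> torch.Tensor:
    """Greedy FPS: repeatedly pick the point farthest from all chosen centers.
    points: (N, 3) -> indices of shape (n_samples,)."""
    N = points.shape[0]
    chosen = torch.zeros(n_samples, dtype=torch.long)
    dist = torch.full((N,), float("inf"))
    farthest = int(torch.randint(0, N, (1,)))
    for i in range(n_samples):
        chosen[i] = farthest
        d = ((points - points[farthest]) ** 2).sum(dim=-1)
        dist = torch.minimum(dist, d)          # distance to nearest chosen center
        farthest = int(torch.argmax(dist))     # next center: farthest remaining point
    return chosen

def knn_group(points: torch.Tensor, centers: torch.Tensor, k: int) -> torch.Tensor:
    """Gather the k nearest neighbors of each center -> (n_centers, k, 3)."""
    d = torch.cdist(centers, points)           # (n_centers, N) pairwise distances
    idx = d.topk(k, largest=False).indices     # (n_centers, k) nearest indices
    return points[idx]

class CrossAttentionFusion(nn.Module):
    """Point tokens attend to image tokens (queries = points, keys/values = image)."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, point_tokens, image_tokens):
        fused, _ = self.attn(point_tokens, image_tokens, image_tokens)
        return self.norm(point_tokens + fused)  # residual connection + layer norm

# Toy usage with assumed sizes: 2048 input points, 64 centers, 32 neighbors.
pts = torch.rand(2048, 3)
centers_idx = farthest_point_sampling(pts, 64)
patches = knn_group(pts, pts[centers_idx], k=32)                        # (64, 32, 3)
point_tokens = nn.Linear(32 * 3, 256)(patches.flatten(1)).unsqueeze(0)  # untrained projection, (1, 64, 256)
image_tokens = torch.rand(1, 196, 256)     # stand-in for ViT patch embeddings
fused = CrossAttentionFusion()(point_tokens, image_tokens)
print(fused.shape)                          # torch.Size([1, 64, 256])
```

In this sketch the fused tokens would then be passed to a transformer decoder that upsamples them into a dense point set; that decoder is omitted here since the abstract gives no further detail.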

Submitted: Dec 25, 2024