Paper ID: 2312.03608
Automated Multimodal Data Annotation via Calibration With Indoor Positioning System
Ryan Rubel, Andrew Dudash, Mohammad Goli, James O'Hara, Karl Wunderlich
Learned object detection methods based on the fusion of LiDAR and camera data require labeled training samples, but niche applications, such as warehouse robotics or automated infrastructure, involve semantic classes not available in large existing datasets. Therefore, to facilitate the rapid creation of multimodal object detection datasets and alleviate the burden of human labeling, we propose a novel automated annotation pipeline. Our method uses an indoor positioning system (IPS) to produce accurate detection labels for both point clouds and images, eliminating manual annotation entirely. In an experiment, the system annotates objects of interest 261.8 times faster than a human baseline and speeds up end-to-end dataset creation by 61.5%.
Submitted: Dec 6, 2023
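
The core idea stated in the abstract, generating detection labels from IPS-reported object poses and a calibrated camera rather than from human annotators, can be illustrated with a minimal sketch. This is not the authors' implementation: the box parameterization, the camera intrinsics `K`, the extrinsic transform `T_world_to_cam`, and all numeric values below are assumed for illustration only.

```python
# Illustrative sketch (not the paper's code): an IPS reports each object's pose
# and size in the world frame, which directly defines a 3D box label; a 2D image
# label follows by projecting the box corners through a calibrated pinhole camera.
import numpy as np

def box_corners_world(center, size, yaw):
    """Eight corners of a 3D box (length, width, height), in world coordinates."""
    l, w, h = size
    x = np.array([ l,  l,  l,  l, -l, -l, -l, -l]) / 2.0
    y = np.array([ w,  w, -w, -w,  w,  w, -w, -w]) / 2.0
    z = np.array([ h, -h,  h, -h,  h, -h,  h, -h]) / 2.0
    corners = np.stack([x, y, z])                        # 3 x 8, object frame
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # yaw about z
    return R @ corners + np.asarray(center, dtype=float).reshape(3, 1)

def project_to_image(corners_world, T_world_to_cam, K):
    """Project 3D corners into pixel coordinates with a pinhole camera model."""
    homog = np.vstack([corners_world, np.ones((1, corners_world.shape[1]))])
    cam = (T_world_to_cam @ homog)[:3]                   # camera-frame points
    if np.any(cam[2] <= 0):                              # object behind the camera
        return None
    return (K @ cam)[:2] / cam[2]                        # perspective divide

def image_label(corners_world, T_world_to_cam, K, img_w, img_h):
    """Axis-aligned 2D bounding-box label, clipped to the image bounds."""
    uv = project_to_image(corners_world, T_world_to_cam, K)
    if uv is None:
        return None
    u_min, v_min = np.clip(uv.min(axis=1), 0, [img_w, img_h])
    u_max, v_max = np.clip(uv.max(axis=1), 0, [img_w, img_h])
    return (u_min, v_min, u_max, v_max)

# Example with assumed values: one tracked object 5 m in front of the camera.
K = np.array([[900.0,   0.0, 640.0],
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])
corners = box_corners_world(center=[0.0, 0.0, 5.0], size=[1.2, 0.8, 1.0], yaw=0.3)
print(image_label(corners, T_world_to_cam=np.eye(4), K=K, img_w=1280, img_h=720))
```

A point-cloud label would follow analogously under the same assumptions: points falling inside the IPS-derived 3D box are assigned the object's semantic class, so no manual point- or pixel-level annotation is needed.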