Depth Annotation
Depth annotation, the process of assigning depth values to pixels in images, is crucial for numerous applications, including autonomous driving and robotics, but obtaining accurate annotations is often expensive and time-consuming. Current research focuses on self-supervised and semi-supervised learning methods, leveraging techniques such as teacher-student architectures, knowledge distillation, and consistency regularization to reduce reliance on labeled data. These approaches often employ deep neural networks, including convolutional neural networks (CNNs) and transformers, sometimes in combination, to estimate depth from single or multiple views, even in challenging settings such as underwater environments or regimes with limited labeled data. Advances in depth annotation are driving progress in 3D scene reconstruction, object detection, and other computer vision tasks, ultimately improving the accuracy and robustness of applications such as autonomous driving and robotics.
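To make the teacher-student and consistency-regularization idea concrete, below is a minimal sketch (in PyTorch, not taken from any of the cited papers) of a semi-supervised depth training loss: a supervised term on annotated images plus a consistency term that distills an EMA teacher's pseudo-depth onto the student's predictions for augmented unlabeled views. The model interfaces, loss weights, and EMA decay are illustrative assumptions.

```python
# Minimal sketch of teacher-student consistency regularization for
# semi-supervised monocular depth estimation (illustrative, not from the papers below).
import torch
import torch.nn.functional as F


def ema_update(teacher, student, decay=0.999):
    """Update the teacher as an exponential moving average of the student (assumed decay)."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)


def semi_supervised_depth_loss(student, teacher, labeled_img, depth_gt,
                               unlabeled_img, unlabeled_img_aug, lam=0.5):
    """Supervised loss on annotated images plus a consistency (distillation) term
    on unlabeled images, using the teacher's prediction as a soft target."""
    # Supervised term: L1 error against the available depth annotations.
    pred_labeled = student(labeled_img)
    sup_loss = F.l1_loss(pred_labeled, depth_gt)

    # Consistency term: teacher pseudo-depth on the clean view supervises the
    # student's prediction on an augmented view of the same image.
    with torch.no_grad():
        pseudo_depth = teacher(unlabeled_img)
    pred_unlabeled = student(unlabeled_img_aug)
    cons_loss = F.l1_loss(pred_unlabeled, pseudo_depth)

    return sup_loss + lam * cons_loss
```

In this setup the student is trained by gradient descent on the combined loss, while the teacher is updated only through `ema_update` after each step, which is one common way such teacher-student pipelines reduce their dependence on dense depth labels.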
Papers
SANPO: A Scene Understanding, Accessibility, Navigation, Pathfinding, Obstacle Avoidance Dataset
Sagar M. Waghmare, Kimberly Wilber, Dave Hawkey, Xuan Yang, Matthew Wilson, Stephanie Debats, Cattalyya Nuengsigkapian, Astuti Sharma, Lars Pandikow, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko
NeuralLabeling: A versatile toolset for labeling vision datasets using Neural Radiance Fields
Floris Erich, Naoya Chiba, Yusuke Yoshiyasu, Noriaki Ando, Ryo Hanai, Yukiyasu Domae