Ground Truth
"Ground truth" refers to the accurate, verifiable data used to train and evaluate machine learning models. Because perfect ground truth is often impractical or impossible to obtain, as in medical image analysis, autonomous driving, and remote sensing, current research focuses on handling incomplete, noisy, or changing labels, employing techniques such as robust loss functions, self-supervised learning, and data augmentation. Developing methods that tolerate imperfect ground truth remains a significant area of ongoing research, driving improvements in model accuracy, reliability, and generalization across diverse domains.
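As a minimal sketch of one of the techniques mentioned above, the Generalized Cross Entropy loss (Zhang & Sabuncu, 2018) is a well-known robust loss for noisy labels: it interpolates between standard cross-entropy and the noise-tolerant mean absolute error. The implementation below is an illustrative NumPy version, not taken from any of the listed papers; the toy probabilities and labels are invented for demonstration.

```python
import numpy as np

def generalized_cross_entropy(probs, labels, q=0.7):
    """Generalized Cross Entropy: L_q = (1 - p_y^q) / q.

    As q -> 0 this recovers cross-entropy; at q = 1 it becomes
    mean absolute error, which is provably robust to symmetric
    label noise. Intermediate q trades off the two.
    """
    # Probability assigned to each sample's (possibly noisy) label.
    p_y = probs[np.arange(len(labels)), labels]
    return float(np.mean((1.0 - p_y ** q) / q))

# Toy batch: the second sample's label disagrees with the model,
# mimicking a mislabeled ground-truth annotation.
probs = np.array([[0.9, 0.1],
                  [0.8, 0.2]])
labels = np.array([0, 1])

gce = generalized_cross_entropy(probs, labels, q=0.7)
ce = float(-np.mean(np.log(probs[np.arange(2), labels])))
```

Because (1 - p^q)/q < -log p for q > 0, the GCE loss penalizes the confidently mislabeled sample less than cross-entropy does, which is what makes it more tolerant of imperfect ground truth.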
Papers
Automating Governing Knowledge Commons and Contextual Integrity (GKC-CI) Privacy Policy Annotations with Large Language Models
Jake Chanenson, Madison Pickering, Noah Apthorpe
Assessing Fidelity in XAI post-hoc techniques: A Comparative Study with Ground Truth Explanations Datasets
M. Miró-Nicolau, A. Jaume-i-Capó, G. Moyà-Alcover
Learning Real-World Image De-Weathering with Imperfect Supervision
Xiaohui Liu, Zhilu Zhang, Xiaohe Wu, Chaoyu Feng, Xiaotao Wang, Lei Lei, Wangmeng Zuo
Object Pose Estimation Annotation Pipeline for Multi-view Monocular Camera Systems in Industrial Settings
Hazem Youssef, Frederik Polachowski, Jérôme Rutinowski, Moritz Roidl, Christopher Reining
Pre-Training LiDAR-Based 3D Object Detectors Through Colorization
Tai-Yu Pan, Chenyang Ma, Tianle Chen, Cheng Perng Phoo, Katie Z Luo, Yurong You, Mark Campbell, Kilian Q. Weinberger, Bharath Hariharan, Wei-Lun Chao