Ground Truth
"Ground truth" refers to the accurate, verifiable data used to train and evaluate machine learning models. Because perfect ground truth is often impractical or impossible to obtain, current research focuses on handling incomplete, noisy, or changing labels, using techniques such as robust loss functions, self-supervised learning, and data augmentation to improve model accuracy and reliability. These advances matter for applications such as medical image analysis, autonomous driving, and remote sensing, and the development of new methods for handling imperfect ground truth remains an active research area, driving improvements in model performance and generalization across diverse domains.
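As one concrete illustration of a robust loss function for noisy ground truth, the sketch below implements the generalized cross-entropy (GCE) loss in plain NumPy. GCE interpolates between standard cross-entropy (as q → 0) and mean absolute error (q = 1): unlike cross-entropy, its per-sample loss is bounded by 1/q, so a mislabeled example cannot dominate training. The function names and the choice of q = 0.7 here are illustrative, not taken from any of the papers listed below.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gce_loss(logits, labels, q=0.7):
    """Generalized cross-entropy: mean of (1 - p_y**q) / q.
    q -> 0 recovers cross-entropy; q = 1 gives MAE, which is
    more robust to noisy labels; each term is bounded by 1/q."""
    p = softmax(logits)
    p_y = p[np.arange(len(labels)), labels]  # probability of the given label
    return np.mean((1.0 - p_y ** q) / q)

# A confident prediction matching the label incurs a small loss; a
# confidently contradicted (possibly noisy) label incurs a bounded loss,
# whereas cross-entropy would grow without bound.
logits = np.array([[4.0, 0.0, 0.0],
                   [4.0, 0.0, 0.0]])
labels_clean = np.array([0, 0])   # labels agree with the predictions
labels_noisy = np.array([0, 1])   # second label contradicts the prediction
print(gce_loss(logits, labels_clean))
print(gce_loss(logits, labels_noisy))
```

The bounded-loss property is what makes this family of losses attractive when ground truth labels cannot be fully trusted; in practice the same idea is usually implemented inside a deep-learning framework's training loop rather than in NumPy.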
Papers
Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability
Shenyuan Gao, Jiazhi Yang, Li Chen, Kashyap Chitta, Yihang Qiu, Andreas Geiger, Jun Zhang, Hongyang Li
Content-Style Decoupling for Unsupervised Makeup Transfer without Generating Pseudo Ground Truth
Zhaoyang Sun, Shengwu Xiong, Yaxiong Chen, Yi Rong
Quater-GCN: Enhancing 3D Human Pose Estimation with Orientation and Semi-supervised Training
Xingyu Song, Zhan Li, Shi Chen, Kazuyuki Demachi
Mapping New Realities: Ground Truth Image Creation with Pix2Pix Image-to-Image Translation
Zhenglin Li, Bo Guan, Yuanzhou Wei, Yiming Zhou, Jingyu Zhang, Jinxin Xu