Ground Truth
"Ground truth" refers to the accurate, verifiable reference data used to train and evaluate machine learning models. Current research addresses the challenges posed by incomplete, noisy, or changing ground truth, employing techniques such as robust loss functions, self-supervised learning, and data augmentation to improve model accuracy and reliability. These advances matter for applications such as medical image analysis, autonomous driving, and remote sensing, where obtaining perfect ground truth is often impractical or impossible. Developing methods that handle imperfect ground truth gracefully remains an active research area, improving model performance and generalization across diverse domains.
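As one illustration of the robust-loss idea mentioned above, the sketch below implements the Generalized Cross Entropy (GCE) loss of Zhang & Sabuncu (2018), a widely used noise-tolerant alternative to standard cross-entropy when the ground-truth labels may be wrong. This is a minimal NumPy sketch for illustration; the function name and array layout are our own choices, not taken from any of the papers listed below.

```python
import numpy as np

def generalized_cross_entropy(probs, labels, q=0.7):
    """Noise-robust Generalized Cross Entropy loss (Zhang & Sabuncu, 2018).

    probs:  (N, C) array of predicted class probabilities
    labels: (N,) integer class labels (possibly noisy ground truth)
    q:      exponent in (0, 1]; q -> 0 recovers cross-entropy,
            q = 1 gives a mean-absolute-error-like loss that is
            less sensitive to confidently mislabelled examples
    """
    # Probability the model assigns to the (possibly noisy) labelled class.
    p_true = probs[np.arange(len(labels)), labels]
    # L_q(p) = (1 - p^q) / q; bounded, so a single wrong label
    # cannot dominate the average loss the way -log p can.
    return np.mean((1.0 - p_true ** q) / q)
```

For any q in (0, 1], the GCE loss lower-bounds cross-entropy on each example, which is what limits the influence of mislabelled points during training.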
Papers
SAGE: Scalable Ground Truth Evaluations for Large Sparse Autoencoders
Constantin Venhoff, Anisoara Calinescu, Philip Torr, Christian Schroeder de Witt
Enhancing Multimodal LLM for Detailed and Accurate Video Captioning using Multi-Round Preference Optimization
Changli Tang, Yixuan Li, Yudong Yang, Jimin Zhuang, Guangzhi Sun, Wei Li, Zujun Ma, Chao Zhang
AIM 2024 Sparse Neural Rendering Challenge: Methods and Results
Michal Nazarczuk, Sibi Catley-Chandar, Thomas Tanay, Richard Shaw, Eduardo Pérez-Pellitero, Radu Timofte, Xing Yan, Pan Wang, Yali Guo, Yongxin Wu, Youcheng Cai, Yanan Yang, Junting Li, Yanghong Zhou, P. Y. Mok, Zongqi He, Zhe Xiao, Kin-Chung Chan, Hana Lebeta Goshu, Cuixin Yang, Rongkang Dong, Jun Xiao, Kin-Man Lam, Jiayao Hao, Qiong Gao, Yanyan Zu, Junpei Zhang, Licheng Jiao, Xu Liu, Kuldeep Purohit
Semi-supervised Learning For Robust Speech Evaluation
Huayun Zhang, Jeremy H.M. Wong, Geyu Lin, Nancy F. Chen