Ground Truth
"Ground truth" refers to the accurate, verifiable data used to train and evaluate machine learning models. Current research focuses on the challenges posed by incomplete, noisy, or evolving ground truth, employing techniques such as robust loss functions, self-supervised learning, and data augmentation to improve model accuracy and reliability. These advances matter for applications such as medical image analysis, autonomous driving, and remote sensing, where obtaining perfect ground truth is often impractical or impossible. Developing methods that handle imperfect ground truth remains an active research area, driving gains in model performance and generalization across diverse domains.
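As a minimal sketch of one of the techniques mentioned above, the generalized cross-entropy loss (Zhang & Sabuncu, 2018) interpolates between standard cross-entropy and mean absolute error via a parameter q, trading some fitting speed for robustness to mislabeled ground truth. The example below is an illustrative NumPy implementation, not taken from any of the listed papers:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def generalized_cross_entropy(logits, labels, q=0.7):
    """Generalized cross-entropy: L_q = (1 - p_y^q) / q.

    As q -> 0 this recovers standard cross-entropy; q = 1 gives
    mean absolute error, which is more robust to label noise.
    q = 0.7 is the value suggested in the original paper.
    """
    probs = softmax(logits)
    # Probability assigned to each example's (possibly noisy) label.
    p_y = probs[np.arange(len(labels)), labels]
    return np.mean((1.0 - p_y ** q) / q)

# Same confident prediction, scored against a clean vs. a noisy label:
logits = np.array([[4.0, 0.0, 0.0],
                   [4.0, 0.0, 0.0]])
clean_loss = generalized_cross_entropy(logits[:1], np.array([0]))
noisy_loss = generalized_cross_entropy(logits[1:], np.array([1]))
```

Because the loss is bounded above by 1/q, a single mislabeled example cannot dominate the gradient the way it can under unbounded cross-entropy, which is the source of the robustness.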
Papers
Looking Inward: Language Models Can Learn About Themselves by Introspection
Felix J Binder, James Chua, Tomek Korbak, Henry Sleight, John Hughes, Robert Long, Ethan Perez, Miles Turpin, Owain Evans
SAda-Net: A Self-Supervised Adaptive Stereo Estimation CNN For Remote Sensing Image Data
Dominik Hirner, Friedrich Fraundorfer
Disentangling Likes and Dislikes in Personalized Generative Explainable Recommendation
Ryotaro Shimizu, Takashi Wada, Yu Wang, Johannes Kruse, Sean O'Brien, Sai HtaungKham, Linxin Song, Yuya Yoshikawa, Yuki Saito, Fugee Tsung, Masayuki Goto, Julian McAuley
SAGE: Scalable Ground Truth Evaluations for Large Sparse Autoencoders
Constantin Venhoff, Anisoara Calinescu, Philip Torr, Christian Schroeder de Witt
Enhancing Multimodal LLM for Detailed and Accurate Video Captioning using Multi-Round Preference Optimization
Changli Tang, Yixuan Li, Yudong Yang, Jimin Zhuang, Guangzhi Sun, Wei Li, Zujun Ma, Chao Zhang