Ground Truth
"Ground truth" refers to the accurate, verifiable data used to train and evaluate machine learning models. Because perfect ground truth is often impractical or impossible to obtain, current research addresses the challenges posed by incomplete, noisy, or changing ground truth, using techniques such as robust loss functions, self-supervised learning, and data augmentation to improve model accuracy and reliability. These advances matter for applications like medical image analysis, autonomous driving, and remote sensing, and developing new methods for handling imperfect ground truth remains an active research area, driving improvements in model performance and generalization across diverse domains.
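As a concrete illustration of one of the techniques above, the sketch below implements a robust loss function for noisy labels: the generalized cross-entropy loss of Zhang & Sabuncu (2018), which interpolates between standard cross-entropy and mean absolute error and bounds the penalty a mislabeled example can contribute. This is a minimal NumPy sketch for illustration, not drawn from the papers listed here; the function names and the choice of q=0.7 are our own.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def generalized_cross_entropy(logits, labels, q=0.7):
    """Generalized cross-entropy: L_q = (1 - p_y^q) / q.

    As q -> 0 this recovers standard cross-entropy; at q = 1 it
    equals mean absolute error. Each sample's loss is bounded by
    1/q, so a single mislabeled example cannot dominate training
    the way an unbounded -log(p_y) term can.
    """
    probs = softmax(logits)
    p_y = probs[np.arange(len(labels)), labels]
    return float(np.mean((1.0 - p_y ** q) / q))

# Two confident predictions for class 0; the second label is flipped
# to simulate label noise in the ground truth.
logits = np.array([[5.0, 0.0], [5.0, 0.0]])
clean_labels = np.array([0, 0])
noisy_labels = np.array([0, 1])

loss_clean = generalized_cross_entropy(logits, clean_labels)
loss_noisy = generalized_cross_entropy(logits, noisy_labels)
print(loss_clean, loss_noisy)
```

The noisy batch incurs a higher loss than the clean one, but each per-sample term stays below 1/q, which is what makes the loss robust to annotation errors.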
Papers
Tree-of-Code: A Tree-Structured Exploring Framework for End-to-End Code Generation and Execution in Complex Task Handling
Ziyi Ni, Yifan Li, Ning Yang, Dou Shen, Pin Lv, Daxiang Dong
A Comparative Study of DSPy Teleprompter Algorithms for Aligning Large Language Models Evaluation Metrics to Human Evaluation
Bhaskarjit Sarmah, Kriti Dutta, Anna Grigoryan, Sachin Tiwari, Stefano Pasquali, Dhagash Mehta