Human Annotation
Human annotation, the process of labeling data for machine learning, is essential but expensive and time-consuming. Current research focuses on easing this bottleneck through techniques such as active learning, which prioritizes the most informative data points for human labeling, and the use of large language models (LLMs) to automate or assist annotation, for example by generating synthetic data or pre-annotating samples. These advances aim to make data annotation more efficient and scalable, accelerating the development and deployment of AI models across domains from natural language processing to medical image analysis. The resulting gains in data quality and reductions in annotation cost have significant implications for the broader AI research community and for many practical applications.
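As a concrete illustration of how active learning prioritizes samples, the sketch below shows pool-based active learning with least-confidence uncertainty sampling in Python. It is a minimal, self-contained example under assumed conditions: the synthetic dataset, the helper query_most_uncertain, the batch size, and the number of rounds are all hypothetical stand-ins, not taken from any of the papers listed here.

# Minimal sketch of pool-based active learning with least-confidence
# uncertainty sampling. All names and parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def query_most_uncertain(model, X_pool, batch_size=10):
    """Return positions of pool samples where the model is least confident,
    measured as 1 minus the maximum predicted class probability."""
    probs = model.predict_proba(X_pool)
    uncertainty = 1.0 - probs.max(axis=1)
    return np.argsort(uncertainty)[-batch_size:]

# Synthetic stand-in for a small labeled seed set plus an unlabeled pool.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = np.arange(20)           # indices of seed-labeled samples
pool = np.arange(20, len(X))      # indices of the "unlabeled" pool

model = LogisticRegression(max_iter=1000)
for _ in range(5):                # five simulated annotation rounds
    model.fit(X[labeled], y[labeled])
    picked = query_most_uncertain(model, X[pool], batch_size=10)
    # In practice these samples would go to human annotators; here we
    # reveal the ground-truth labels to simulate that step.
    labeled = np.concatenate([labeled, pool[picked]])
    pool = np.delete(pool, picked)

In a real pipeline the revealed labels would come from human annotators (or an LLM pre-annotator subject to human review), and the least-confidence score could be swapped for margin- or entropy-based uncertainty without changing the loop structure.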
Papers
HAUR: Human Annotation Understanding and Recognition Through Text-Heavy Images
Yuchen Yang, Haoran Yan, Yanhao Chen, Qingqiang Wu, Qingqi Hong
EvalMuse-40K: A Reliable and Fine-Grained Benchmark with Comprehensive Human Annotations for Text-to-Image Generation Model Evaluation
Shuhao Han, Haotian Fan, Jiachen Fu, Liang Li, Tao Li, Junhui Cui, Yunqiu Wang, Yang Tai, Jingwei Sun, Chunle Guo, Chongyi Li
NAVCON: A Cognitively Inspired and Linguistically Grounded Corpus for Vision and Language Navigation
Karan Wanchoo, Xiaoye Zuo, Hannah Gonzalez, Soham Dan, Georgios Georgakis, Dan Roth, Kostas Daniilidis, Eleni Miltsakaki
OmniEval: An Omnidirectional and Automatic RAG Evaluation Benchmark in Financial Domain
Shuting Wang, Jiejun Tan, Zhicheng Dou, Ji-Rong Wen