Inter-Annotator Agreement
Inter-annotator agreement (IAA) measures how consistently multiple human annotators assign labels to the same data, a crucial concern in fields that depend on subjective judgments, such as emotion recognition and grammaticality assessment. Current research focuses on developing robust IAA metrics for complex data types (e.g., images, text, and structured data) and on models that account for annotator variability, often using Bayesian neural networks or transformer-based architectures. High IAA is essential for creating reliable training datasets for machine learning models and for ensuring the validity of research findings across diverse annotation tasks, which in turn improves the accuracy and reliability of automated systems.
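As a minimal illustration of what an IAA metric computes, the sketch below implements Cohen's kappa, a standard pairwise agreement measure that corrects observed agreement for agreement expected by chance. The function and the toy emotion labels are hypothetical examples, not drawn from any paper listed here.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e is the agreement expected by chance
    from each annotator's marginal label distribution.
    """
    assert len(labels_a) == len(labels_b), "annotators must label the same items"
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: sum over labels of the product of marginals.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(labels_a) | set(labels_b))
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical example: two annotators labeling six utterances for emotion.
ann_1 = ["joy", "anger", "joy", "sad", "joy", "anger"]
ann_2 = ["joy", "anger", "sad", "sad", "joy", "joy"]
print(f"kappa = {cohen_kappa(ann_1, ann_2):.3f}")  # ~0.478: moderate agreement
```

Kappa of 1.0 indicates perfect agreement and 0.0 indicates agreement no better than chance; for more than two annotators, generalizations such as Fleiss' kappa or Krippendorff's alpha are typically used instead.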