Inter-Annotator Agreement
Inter-annotator agreement (IAA) measures the consistency of labels assigned by multiple human annotators to the same data and is crucial for ensuring data quality in machine learning projects. Current research focuses on improving IAA assessment methods, particularly for complex annotation tasks involving images, text, and structured data, often employing metrics such as Krippendorff's alpha or Fleiss' kappa and exploring novel approaches for handling incomplete datasets. High IAA is essential for training reliable machine learning models and for building trustworthy datasets across diverse fields, from medical diagnosis to social media analysis, ultimately affecting the validity and generalizability of both research findings and practical applications.
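To make the metric concrete, below is a minimal sketch of Fleiss' kappa, one of the agreement measures mentioned above, computed from its standard definition. The function name, the count-matrix layout, and the example data are illustrative assumptions, not taken from the papers listed here.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for a counts matrix of shape (items, categories),
    where counts[i, j] is the number of annotators who assigned item i
    to category j. Assumes every item is rated by the same number of
    annotators."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]  # annotators per item (constant by assumption)

    # Per-item observed agreement P_i, then its mean across items
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Chance agreement P_e from the marginal category proportions
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.sum(p_j ** 2)

    return (p_bar - p_e) / (1 - p_e)

# Illustrative (hypothetical) counts: 4 items, 3 annotators each, 2 categories.
example = np.array([
    [3, 0],  # all three annotators chose category 0
    [2, 1],
    [0, 3],
    [1, 2],
])
print(f"Fleiss' kappa: {fleiss_kappa(example):.3f}")  # ~0.333 for this example
```

Krippendorff's alpha follows a similar observed-versus-expected-disagreement logic but additionally handles missing ratings and non-nominal data, which is why it is often preferred for the incomplete datasets noted above.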
Papers
Transcending Traditional Boundaries: Leveraging Inter-Annotator Agreement (IAA) for Enhancing Data Management Operations (DMOps)
Damrin Kim, NamHyeok Kim, Chanjun Park, Harksoo Kim
Inter-Annotator Agreement in the Wild: Uncovering Its Emerging Roles and Considerations in Real-World Scenarios
NamHyeok Kim, Chanjun Park