Quality Metric
Quality metrics aim to assess the quality of diverse data types objectively, from translations and videos to point clouds and knowledge graphs, often by comparing automated scores against human judgments or by exploiting intrinsic properties of the data. Current research emphasizes developing more interpretable metrics and addressing biases and limitations of existing methods, frequently employing machine-learning techniques such as contrastive learning and incorporating perceptual models of human vision. Improved quality metrics are crucial for advancing numerous fields: they enable better evaluation of AI models, strengthen data quality control, and ultimately lead to more reliable and trustworthy applications across diverse domains.
Papers
Quantifying Quality of Class-Conditional Generative Models in Time-Series Domain
Alireza Koochali, Maria Walch, Sankrutyayan Thota, Peter Schichtel, Andreas Dengel, Sheraz Ahmed
A Survey of Parameters Associated with the Quality of Benchmarks in NLP
Swaroop Mishra, Anjana Arunkumar, Chris Bryan, Chitta Baral