Quality Metric
Quality metrics aim to assess the quality of diverse data types objectively, from translations and videos to point clouds and knowledge graphs, typically by comparing automated scores against human judgments or by exploiting intrinsic properties of the data. Current research emphasizes more interpretable metrics that address the biases and limitations of existing methods, often using machine-learning techniques such as contrastive learning and incorporating perceptual models of human vision. Better quality metrics matter across many fields: they enable sounder evaluation of AI models, strengthen data quality control, and ultimately lead to more reliable and trustworthy applications.
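A standard way to validate an automated metric against human judgments, as described above, is rank correlation: if the metric orders examples the same way human raters do, the correlation is high. The sketch below is a minimal, dependency-free illustration of this evaluation step using Spearman's rank correlation; the example scores are invented for illustration and are not from any of the papers listed here.

```python
def average_ranks(values):
    """Return 1-based ranks, averaging ranks across tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)


# Hypothetical data: human ratings (1-5) and an automated metric's scores
# for the same five examples. A value near 1.0 means the metric ranks
# examples the same way the human raters do.
human_scores = [4.5, 3.0, 2.0, 4.0, 1.5]
metric_scores = [0.90, 0.60, 0.40, 0.85, 0.30]
print(spearman(human_scores, metric_scores))  # prints 1.0 (same ordering)
```

In practice one would use a tested implementation such as `scipy.stats.spearmanr`; the point here is only to make the "compare automated scores against human judgments" step concrete.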
Papers
A Reference-less Quality Metric for Automatic Speech Recognition via Contrastive-Learning of a Multi-Language Model with Self-Supervision
Kamer Ali Yuksel, Thiago Ferreira, Ahmet Gunduz, Mohamed Al-Badrashiny, Golara Javadi
NoRefER: a Referenceless Quality Metric for Automatic Speech Recognition via Semi-Supervised Language Model Fine-Tuning with Contrastive Learning
Kamer Ali Yuksel, Thiago Ferreira, Golara Javadi, Mohamed El-Badrashiny, Ahmet Gunduz
Quantifying Quality of Class-Conditional Generative Models in Time-Series Domain
Alireza Koochali, Maria Walch, Sankrutyayan Thota, Peter Schichtel, Andreas Dengel, Sheraz Ahmed
A Survey of Parameters Associated with the Quality of Benchmarks in NLP
Swaroop Mishra, Anjana Arunkumar, Chris Bryan, Chitta Baral