Metric Evaluation
Metric evaluation assesses the performance of algorithms and models, with the goal of developing robust, reliable methods for comparing and ranking different approaches. Current research focuses on addressing the limitations of existing metrics, such as sensitivity to data variance, and on building more holistic evaluations that go beyond simple accuracy to cover aspects like fairness, efficiency, and the temporal and spatial qualities of generated video. This work is crucial for advancing fields such as machine translation, federated learning, and AI model development, since more accurate and informative assessments ultimately lead to better model design and deployment.
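One concrete form the variance-sensitivity problem takes: a point estimate of a metric can rank system A above system B even when the difference is well within sampling noise. A minimal, illustrative sketch (not drawn from any specific paper listed here; the data and function names are hypothetical) is to attach bootstrap confidence intervals to each system's score before trusting the ranking:

```python
import random

def bootstrap_ci(scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a mean metric score.

    `scores` is a list of per-example metric values (e.g. 0/1 correctness).
    Resamples the test set with replacement `n_boot` times and returns the
    (alpha/2, 1 - alpha/2) percentiles of the resampled means.
    """
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        sum(rng.choice(scores) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-example scores for two systems on the same test set.
system_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # point accuracy 0.7
system_b = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]  # point accuracy 0.6
ci_a = bootstrap_ci(system_a)
ci_b = bootstrap_ci(system_b)
# If the intervals overlap heavily, the A > B ranking is not robust
# to sampling variance, despite the gap in point estimates.
print(ci_a, ci_b)
```

On small test sets like this one the intervals typically overlap, which is exactly the failure mode that motivates variance-aware comparison protocols.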