Human-Centered Evaluation
Human-centered evaluation (HCE) assesses artificial intelligence (AI) systems by prioritizing human perception, understanding, and experience rather than relying solely on traditional metrics. Current research emphasizes frameworks and methodologies for evaluating AI across diverse applications, including biometric systems, image generation, recommender systems, and explainable AI (XAI), often combining user studies and qualitative analysis with quantitative measures. This shift toward HCE is crucial for building trustworthy and beneficial AI systems: by ensuring alignment with human needs and expectations, it ultimately improves the design and deployment of AI across fields.