Collaborative Evaluation
Collaborative evaluation improves the assessment of complex systems, particularly AI models, by integrating human expertise with automated methods. Current research emphasizes frameworks that pair human judgment with the scalability of large language models or other AI techniques for tasks such as evaluating natural language generation or assessing the fairness, usefulness, and reliability of AI in healthcare. The approach addresses the limitations of purely human or purely automated evaluation, yielding more robust and reliable assessments, with implications for AI development and deployment across many domains.
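One common pattern in such human–AI frameworks is confidence-based escalation: an automated judge scores every item, and only low-confidence cases are routed to a human rater. The sketch below illustrates this routing logic under stated assumptions; `llm_score` and `human_review` are hypothetical stand-ins (a toy heuristic and a fixed rating), not any specific system from the papers listed here.

```python
def llm_score(text):
    """Stand-in for an automated LLM judge.

    Returns (score, confidence). A real system would call a model;
    this toy heuristic just uses word count for illustration.
    """
    words = len(text.split())
    score = min(1.0, words / 10)
    confidence = 0.9 if words >= 3 else 0.4  # short inputs are "uncertain"
    return score, confidence

def human_review(text):
    """Placeholder for a human rater; assumed to return a definitive score."""
    return 0.5

def collaborative_evaluate(items, conf_threshold=0.7):
    """Route low-confidence automated judgments to a human rater."""
    results = []
    for text in items:
        score, conf = llm_score(text)
        if conf < conf_threshold:
            score = human_review(text)  # escalate the uncertain case
            source = "human"
        else:
            source = "llm"
        results.append({"text": text, "score": score, "source": source})
    return results

outputs = collaborative_evaluate(
    ["Good.", "The generated summary covers all key points well."]
)
```

The division of labor keeps human effort proportional to the automated judge's uncertainty: confident scores are accepted as-is, while ambiguous cases get the more expensive human pass.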
Papers
March 19, 2024
February 27, 2024
January 23, 2024
October 30, 2023
October 23, 2023
July 20, 2023
November 28, 2022