Offline Evaluation
Offline evaluation assesses the performance of algorithms and systems on pre-collected data, with the goal of predicting real-world performance without costly online testing. Current research focuses on mitigating biases inherent in offline data, particularly popularity bias and confounding factors, and on strengthening the correlation between offline metrics and actual online performance, using techniques such as propensity scoring, importance sampling, and counterfactual analysis. Reliable offline evaluation is crucial for the responsible development and deployment of AI systems across domains ranging from recommender systems and autonomous driving to healthcare and network optimization, since it enables efficient, trustworthy assessment before systems reach production.
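To make the importance-sampling idea concrete, the sketch below shows inverse propensity scoring (IPS) and its self-normalized variant (SNIPS) for evaluating a new policy from logged bandit feedback. The synthetic data, policy definitions, and variable names are illustrative assumptions, not drawn from any specific paper above; the estimators themselves are the standard IPS/SNIPS formulations.

```python
# A minimal sketch of IPS / SNIPS off-policy evaluation on logged bandit
# feedback. The logging policy, target policy, and reward model are
# synthetic assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n, k = 10_000, 5                        # logged interactions, number of actions

# Logged data: the deployed (logging) policy chose actions with known
# propensities, and only the chosen action's reward was observed.
logging_probs = np.full(k, 1.0 / k)     # uniform logging policy
actions = rng.integers(0, k, size=n)    # logged actions
true_reward = np.linspace(0.1, 0.5, k)  # hidden per-action reward rates
rewards = rng.binomial(1, true_reward[actions]).astype(float)

# Target policy to evaluate offline: prefers higher-index actions.
target_probs = np.arange(1, k + 1, dtype=float)
target_probs /= target_probs.sum()

# Importance weights: pi_target(a) / pi_logging(a) for each logged action.
weights = target_probs[actions] / logging_probs[actions]

ips = np.mean(weights * rewards)                      # unbiased IPS estimate
snips = np.sum(weights * rewards) / np.sum(weights)   # self-normalized, lower variance

print(f"IPS estimate:   {ips:.4f}")
print(f"SNIPS estimate: {snips:.4f}")
print(f"True value:     {np.dot(target_probs, true_reward):.4f}")
```

Both estimates should land near the true target-policy value of about 0.37 here; SNIPS trades a small bias for reduced variance, which is why it is often preferred when importance weights are heavy-tailed.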
Papers
Twenty papers, dated from August 16, 2022 to November 14, 2024.