TREC Evaluation
TREC evaluation provides standardized benchmarks for assessing information retrieval (IR) systems, focusing on how well retrieved documents or answers are ranked by relevance to user queries. Current research emphasizes the use of large language models (LLMs) within these evaluations, exploring strategies such as listwise ranking and tournament-based approaches to improve ranking accuracy while addressing challenges like limited input length and bias. This rigorous evaluation framework is crucial for advancing IR technology, fostering fairer and more effective search engines and question-answering systems across languages and domains.
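As a concrete illustration of the kind of relevance-based ranking metric TREC-style evaluations report, the short Python sketch below computes mean nDCG@k for a run against graded relevance judgments (qrels). The data layout and function name are illustrative assumptions, not tied to any specific paper; official campaigns typically rely on tools such as trec_eval rather than ad-hoc scripts, and this uses one common (linear-gain) DCG formulation.

import math

def ndcg_at_k(qrels, run, k=10):
    """Mean nDCG@k over queries, TREC-style (illustrative sketch).

    qrels: {query_id: {doc_id: graded relevance (int)}}
    run:   {query_id: [doc_id, ...] ranked by decreasing system score}
    """
    scores = []
    for qid, ranking in run.items():
        rels = qrels.get(qid, {})
        # DCG of the system ranking, truncated at rank k (positions are 1-indexed,
        # so the discount for the i-th item of enumerate is log2(i + 2))
        dcg = sum(
            rels.get(doc, 0) / math.log2(rank + 2)
            for rank, doc in enumerate(ranking[:k])
        )
        # Ideal DCG: the same formula over the best possible ordering of judgments
        ideal = sorted(rels.values(), reverse=True)[:k]
        idcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(ideal))
        scores.append(dcg / idcg if idcg > 0 else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

# Toy example: one query with graded judgments and a system ranking
qrels = {"q1": {"d1": 3, "d2": 0, "d3": 1}}
run = {"q1": ["d3", "d1", "d2"]}
print(f"nDCG@10 = {ndcg_at_k(qrels, run):.4f}")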