Retrieval Benchmark
Retrieval benchmarks are standardized datasets and evaluation protocols used to assess how accurately and efficiently information retrieval systems return relevant items (e.g., text, images, or videos) for a query. Current research emphasizes building more robust benchmarks that address the limitations of existing ones, such as bias toward simple queries, neglect of fine-grained details, and failure to account for AI-generated content. This work involves creating new benchmarks with diverse data sources and tasks, exploring model architectures such as dual encoders, graph neural networks, and diffusion models, and reporting efficiency metrics alongside accuracy. Improved benchmarks are crucial for advancing information retrieval research and for driving the development of more effective and efficient search systems across domains.
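To make the evaluation protocol concrete, the sketch below computes two accuracy metrics commonly reported on retrieval benchmarks, Recall@K and mean reciprocal rank (MRR), from a system's ranked results and gold relevance labels. The query and document IDs (`q1`, `d1`, etc.) and the toy run data are hypothetical, not drawn from any benchmark named above; this is a minimal illustration of the metric definitions, not a specific benchmark's official scorer.

```python
from typing import Dict, List, Set


def recall_at_k(ranked_ids: List[str], relevant_ids: Set[str], k: int) -> float:
    """Fraction of relevant items that appear in the top-k results."""
    top_k = set(ranked_ids[:k])
    return len(top_k & relevant_ids) / len(relevant_ids)


def reciprocal_rank(ranked_ids: List[str], relevant_ids: Set[str]) -> float:
    """1 / rank of the first relevant item, or 0 if none is retrieved."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0


# Hypothetical benchmark run: each query maps to its ranked document IDs,
# and qrels holds the gold relevance judgments for each query.
runs: Dict[str, List[str]] = {
    "q1": ["d3", "d1", "d7"],
    "q2": ["d2", "d9", "d4"],
}
qrels: Dict[str, Set[str]] = {
    "q1": {"d1"},
    "q2": {"d5"},
}

# Benchmark scores are averages of the per-query metrics.
r_at_3 = sum(recall_at_k(runs[q], qrels[q], 3) for q in runs) / len(runs)
mrr = sum(reciprocal_rank(runs[q], qrels[q]) for q in runs) / len(runs)
print(f"Recall@3 = {r_at_3:.2f}, MRR = {mrr:.2f}")  # Recall@3 = 0.50, MRR = 0.25
```

Efficiency metrics, mentioned above as an increasingly common companion to accuracy, would be measured separately (e.g., query latency or index size) and reported alongside these scores.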
Papers
Contrastive Video-Language Learning with Fine-grained Frame Sampling
Zixu Wang, Yujie Zhong, Yishu Miao, Lin Ma, Lucia Specia
Fighting FIRe with FIRE: Assessing the Validity of Text-to-Video Retrieval Benchmarks
Pedro Rodriguez, Mahmoud Azab, Becka Silvert, Renato Sanchez, Linzy Labson, Hardik Shah, Seungwhan Moon