Paper ID: 2412.20061 • Published Dec 28, 2024
Comparative Analysis of Listwise Reranking with Large Language Models in Limited-Resource Language Contexts
Yanxin Shen, Lun Wang, Chuanqi Shi, Shaoshuai Du, Yiyi Tao, Yixian Shen, Hang Zhang
Large Language Models (LLMs) have demonstrated significant effectiveness
across various NLP tasks, including text ranking. This study assesses the
performance of LLMs in listwise reranking for limited-resource African
languages. We compare the proprietary models RankGPT3.5, Rank4o-mini,
RankGPTo1-mini, and RankClaude-sonnet in cross-lingual contexts. Results
indicate that these LLMs significantly outperform traditional baseline
methods such as BM25-DT on most evaluation metrics, particularly nDCG@10 and
MRR@100. These findings highlight the potential of LLMs for enhancing
reranking in low-resource languages and offer insights into cost-effective
solutions.
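The two evaluation metrics named in the abstract, nDCG@10 and MRR@100, are standard ranking measures. As a rough illustration (not the paper's own evaluation code), they can be computed from a ranked list of graded relevance judgments as follows; the function names and example relevance values here are illustrative assumptions:

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain: each relevance grade is
    # discounted by log2 of its (1-indexed) rank position + 1.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k=10):
    # nDCG@k: DCG of the produced ranking divided by the DCG of
    # the ideal (relevance-sorted) ranking of the same items.
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

def mrr_at_k(ranked_relevances, k=100):
    # MRR@k: reciprocal rank of the first relevant item
    # (relevance > 0) within the top k; 0 if none is found.
    for i, rel in enumerate(ranked_relevances[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0
```

A reranker that moves relevant documents toward the top of the list raises both scores: a perfect ordering yields nDCG@10 of 1.0, and placing the first relevant document at rank 3 yields MRR of 1/3.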