Pseudo Relevance Feedback

Pseudo-relevance feedback (PRF) enhances information retrieval by automatically refining search queries using relevance signals inferred from the top-ranked documents of an initial retrieval, without requiring explicit user feedback. Current research focuses on integrating PRF with large language models (LLMs) and advanced neural architectures such as transformers, employing techniques like query reformulation, ensemble prompting, and contrastive learning to improve the quality and efficiency of feedback incorporation. These advances aim to narrow the semantic gap between user intent and retrieved information, yielding more accurate and relevant results across applications such as web search, question answering, and video retrieval.
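
As a rough illustration of the basic PRF loop described above, the sketch below runs an initial retrieval, treats the top-ranked documents as pseudo-relevant, adds their most frequent unseen terms to the query, and re-retrieves. It is a toy example under stated assumptions: the corpus, the simple TF-IDF scorer, and the parameters `top_k` and `num_expansion_terms` are illustrative choices, not part of any specific method surveyed here.

```python
"""Minimal pseudo-relevance feedback (PRF) sketch.

Toy illustration only: the corpus, scoring function, and parameter
values are assumptions chosen for clarity, not a production system.
"""
import math
from collections import Counter

# Hypothetical toy corpus standing in for an indexed document collection.
CORPUS = [
    "neural networks for image retrieval",
    "transformer models improve web search ranking",
    "query expansion with relevance feedback in search engines",
    "contrastive learning for video retrieval",
    "large language models rewrite user queries for question answering",
]


def tokenize(text):
    return text.lower().split()


def tf_idf_scores(query_terms, docs):
    """Score each document by the summed TF-IDF weight of the query terms."""
    n_docs = len(docs)
    doc_tokens = [tokenize(d) for d in docs]
    df = Counter(t for tokens in doc_tokens for t in set(tokens))
    scores = []
    for tokens in doc_tokens:
        tf = Counter(tokens)
        score = sum(
            tf[t] * math.log((n_docs + 1) / (df[t] + 1))
            for t in query_terms
        )
        scores.append(score)
    return scores


def prf_expand(query, docs, top_k=2, num_expansion_terms=3):
    """One PRF round: retrieve, assume the top_k documents are relevant,
    append their most frequent new terms to the query, and re-retrieve."""
    query_terms = tokenize(query)
    initial = tf_idf_scores(query_terms, docs)
    top_docs = sorted(range(len(docs)), key=lambda i: initial[i], reverse=True)[:top_k]

    # Collect candidate expansion terms from the pseudo-relevant documents.
    feedback_terms = Counter()
    for i in top_docs:
        for t in tokenize(docs[i]):
            if t not in query_terms:
                feedback_terms[t] += 1
    expansion = [t for t, _ in feedback_terms.most_common(num_expansion_terms)]

    # Re-rank the collection with the expanded query.
    expanded_query = query_terms + expansion
    final = tf_idf_scores(expanded_query, docs)
    ranking = sorted(range(len(docs)), key=lambda i: final[i], reverse=True)
    return expanded_query, ranking


if __name__ == "__main__":
    expanded, ranking = prf_expand("relevance feedback search", CORPUS)
    print("expanded query:", expanded)
    print("re-ranked doc indices:", ranking)
```

The same loop generalizes to the neural and LLM-based variants mentioned above by swapping the TF-IDF scorer for a dense retriever and the term-counting step for learned query reformulation.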

Papers