Passage Embeddings
Passage embeddings represent text segments as dense vectors, enabling efficient similarity comparisons for tasks such as information retrieval and question answering. Current research focuses on making these embeddings more efficient and effective, notably by leveraging large language models (LLMs) and by exploring training methods such as listwise learning and generation-augmented learning that better capture semantic meaning and contextual information. This work matters because it addresses limitations of existing retrieval systems, especially with long documents and complex relationships between entities, leading to more accurate and efficient information access.
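To make the dense-vector setup concrete, the sketch below embeds a query and a few passages and ranks the passages by cosine similarity. It is a minimal illustration under stated assumptions, not a method from any particular paper: the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint are assumptions chosen for brevity, and any passage encoder could be substituted.

```python
# Minimal passage-retrieval sketch: embed passages and a query as dense
# vectors, then rank passages by cosine similarity to the query.
# Assumes the sentence-transformers library and the all-MiniLM-L6-v2
# checkpoint; these are illustrative choices, not a fixed method.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

passages = [
    "Passage embeddings map text segments to dense vectors.",
    "Dense retrieval compares query and passage vectors by similarity.",
    "LLMs can be adapted to produce stronger passage representations.",
]
query = "How are passages compared in dense retrieval?"

# Encode everything into unit-length vectors so that the dot product
# equals cosine similarity.
passage_vecs = model.encode(passages, normalize_embeddings=True)
query_vec = model.encode(query, normalize_embeddings=True)

# Score each passage against the query and print them best-first.
scores = util.cos_sim(query_vec, passage_vecs)[0]
for score, text in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.3f}  {text}")
```

In a real retrieval system the passage vectors would be computed offline and stored in an approximate nearest-neighbor index, so that only the query is embedded at search time.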