Source Alignment
Source alignment, the task of identifying corresponding information units across multiple sources, is crucial for applications such as multi-document summarization and large language model (LLM) training. Current research focuses on improving alignment accuracy at finer granularities (e.g., proposition spans rather than whole sentences), developing robust evaluation metrics (especially for faithfulness in summarization), and addressing the challenges that noisy or unreliable sources pose for LLMs. These advances are vital for improving the reliability and efficiency of NLP applications ranging from clinical summarization to machine translation, and for ensuring the safety and trustworthiness of LLMs.
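As a concrete illustration of the alignment step itself, the sketch below pairs each unit of a summary with its most similar unit in a source document using embedding cosine similarity. This is a minimal, generic baseline rather than a method from the papers summarized here: the sentence-transformers package, the model name, and the 0.5 similarity threshold are all illustrative assumptions, and finer-grained systems would operate on proposition spans instead of sentences.

```python
# Minimal similarity-based source alignment: match each summary unit
# (here, a sentence; finer-grained work uses proposition spans) to its
# best-matching unit in the source document.
# Assumes the sentence-transformers package; the model choice is illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def align(summary_units: list[str], source_units: list[str],
          threshold: float = 0.5) -> list[tuple[int, int, float]]:
    """Return (summary_idx, source_idx, score) for each summary unit
    whose best source match clears the similarity threshold."""
    summ_emb = model.encode(summary_units, convert_to_tensor=True)
    src_emb = model.encode(source_units, convert_to_tensor=True)
    sims = util.cos_sim(summ_emb, src_emb)  # shape: (|summary|, |source|)
    alignments = []
    for i in range(len(summary_units)):
        j = int(sims[i].argmax())
        score = float(sims[i][j])
        if score >= threshold:
            alignments.append((i, j, score))
    return alignments

summary = ["The drug reduced symptoms in most patients."]
source = [
    "Patients received 10 mg daily for six weeks.",
    "Symptom scores improved in 78% of the treatment group.",
]
print(align(summary, source))
```

Summary units left unmatched by such a procedure (those falling below the threshold) are one common signal exploited by faithfulness metrics: content that aligns to nothing in the source is a candidate hallucination.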