Abstractive Summarization Models
Abstractive summarization models aim to generate concise, coherent summaries that paraphrase the source text, unlike extractive methods, which simply select sentences from it. Current research focuses on improving factual consistency and reducing hallucination, often through techniques such as contrastive learning, reward shaping, and auxiliary-information integration within architectures like BART, PEGASUS, and large language models (LLMs). These advances are crucial for the reliability and applicability of abstractive summarization across diverse domains, including legal text processing and disaster response, where accurate and trustworthy summaries are paramount.
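As a concrete illustration of the paraphrasing behavior described above, the sketch below runs a pretrained BART summarizer through the Hugging Face `transformers` pipeline API. The checkpoint name (`facebook/bart-large-cnn`) and generation parameters are illustrative assumptions, not settings drawn from any particular paper.

```python
# Minimal sketch: abstractive summarization with a pretrained BART
# checkpoint via the Hugging Face `transformers` library (assumed installed).
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Abstractive summarization systems generate new sentences that "
    "paraphrase a source document, rather than copying sentences verbatim. "
    "Recent work focuses on keeping these generated summaries factually "
    "consistent with the input."
)

# do_sample=False gives deterministic beam-search decoding; the length
# bounds below are illustrative and keep the output concise.
result = summarizer(article, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```

A PEGASUS checkpoint (e.g. `google/pegasus-xsum`) can be swapped in by changing only the `model` argument, since the pipeline interface is shared across summarization architectures.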