Abstractive Summary
Abstractive summarization aims to produce concise, coherent summaries that paraphrase the source text, unlike extractive methods, which copy sentences directly. Current research focuses on improving the factual accuracy and coherence of summaries generated by large language models (LLMs), often employing techniques such as reinforcement learning, contrastive learning, and knowledge distillation to reduce hallucinations and inconsistencies. These advances are crucial for applications that require reliable summaries of diverse data types, including user activity, legal documents, and scientific literature, and they improve information access and analysis across many fields.
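To make the extractive/abstractive distinction concrete, here is a minimal sketch of an extractive baseline: it can only select and copy sentences verbatim, whereas an abstractive model generates new wording. The function name, scoring heuristic (average word frequency), and example text are illustrative, not drawn from any specific system discussed above.

```python
import re
from collections import Counter

def extractive_summary(text, k=1):
    """Toy extractive baseline: score each sentence by the average
    corpus frequency of its words and return the top-k sentences.
    Unlike an abstractive model, it copies sentences verbatim and
    can never paraphrase."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sent):
        toks = re.findall(r"[a-z']+", sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:k])
    # Preserve original document order in the output.
    return " ".join(s for s in sentences if s in top)

text = ("Summarization models compress text. "
        "Summarization models generate summaries from text. "
        "Cats sleep.")
print(extractive_summary(text, k=1))
```

Note that the output is always a substring of the input; abstractive systems drop that constraint, which is precisely what makes paraphrasing possible but also opens the door to the hallucination problems that the techniques above target.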